WO2020063527A1 - Method for generating a human hairstyle based on multi-feature retrieval and deformation - Google Patents

Method for generating a human hairstyle based on multi-feature retrieval and deformation

Info

Publication number
WO2020063527A1
WO2020063527A1 (PCT/CN2019/107263)
Authority
WO
WIPO (PCT)
Prior art keywords
hair
hairstyle
hairstyles
distance
feature
Prior art date
Application number
PCT/CN2019/107263
Other languages
English (en)
French (fr)
Inventor
蒋琪雷
马原曦
张迎梁
Original Assignee
叠境数字科技(上海)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 叠境数字科技(上海)有限公司 filed Critical 叠境数字科技(上海)有限公司
Priority to US16/962,227 priority Critical patent/US10891511B1/en
Priority to KR1020207016099A priority patent/KR102154470B1/ko
Priority to GB2009174.0A priority patent/GB2581758B/en
Priority to JP2020533612A priority patent/JP6891351B2/ja
Publication of WO2020063527A1 publication Critical patent/WO2020063527A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53Querying
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2457Query processing with adaptation to user needs
    • G06F16/24578Query processing with adaptation to user needs using ranking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53Querying
    • G06F16/532Query formulation, e.g. graphical querying
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/56Information retrieval; Database structures therefor; File system structures therefor of still image data having vectorial format
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/5854Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using shape and object relationship
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/2163Partitioning the feature space
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/02Affine transformations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/758Involving statistics of pixels or of feature values, e.g. histogram matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/165Detection; Localisation; Normalisation using facial parts and geometric relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Definitions

  • The invention relates to the field of three-dimensional images, and in particular to a method for generating a human hairstyle based on multi-feature retrieval and deformation.
  • The generation of hair models belongs to three-dimensional head reconstruction technology. It is an important part of a virtual character's image and one of the most important features of a virtual person. The head is usually divided into two parts: the face and the hair.
  • The currently widespread method uses a frontal photo and a profile photo as the information source: it extracts the front and side facial and hair feature points of the subject, generates a three-dimensional head model, generates a two-dimensional head texture from the hair feature points, and maps it onto the three-dimensional head model. According to the feature points of the hair region, a Coons surface is fitted to the hair region, deformed, and texture-mapped.
  • Modeling from a single photo generally extracts useful prior knowledge from a three-dimensional face database and then infers the three-dimensional model corresponding to the face in the photo. Related art includes:
  • WO2016/CN107121 discloses a method, device, and terminal for reconstructing a user's hair model, including: obtaining a frontal face image of the user to be reconstructed; determining its hair region image; matching the hair region against the three-dimensional (3D) hair models in a database to obtain the 3D hair model closest to the hair region image; and determining that closest 3D hair model as the reconstructed user's 3D hair model.
  • Chinese patent application CN201680025609.1 relates to a three-dimensional hair modeling method and device, including: determining a first coordinate transformation between the 3D head model for which hair is to be created and a preset reference head model, and a second coordinate transformation between the 3D head model and a preset 3D hair template; registering the 3D head model with the 3D hair template based on the first and second coordinate transformations, the 3D hair template matching the reference head model; and, when an erroneous region is detected in the registered 3D hair template, deforming the hair in that region with a radial basis function (RBF) to correct it. An erroneous region is one where the 3D hair template does not completely occlude the scalp region of the 3D head model, or where the hair-root region of the 3D hair template occludes a non-scalp region of the 3D head model.
  • Chinese patent CN201310312500.4 relates to a method for automatically generating a three-dimensional avatar, including: building a three-dimensional face database; collecting a three-dimensional hairstyle database; detecting the face in an input frontal photo with a face detection algorithm and locating the frontal facial feature points with an active shape model; generating a three-dimensional face model with a deformable-model method from the 3D face database, the input photo, and the facial feature point coordinates; segmenting the hair in the input photo with a Markov-random-field-based method; extracting the hair texture from the segmentation result; obtaining the final matching hair model; and combining the face model with the hair model. This avoids manually adding hairstyles, improves efficiency, and ensures high fidelity.
  • CN201410161576.6 discloses a device for generating virtual human hair, including: an obtaining unit that obtains a frontal photo of a face; a first determining unit that determines a three-dimensional head model from the obtained photo and determines the adaptation value of a hair template; a second determining unit that, from the preset correspondence between standard adaptation values of hair templates and the description information of standard hair templates, determines the description information of the standard hair template corresponding to the adaptation value determined by the first determining unit; and a generating unit that, from that description information and the three-dimensional head model determined by the first determining unit, obtains an exclusive hair template suited to the three-dimensional head model.
  • The present invention aims to provide a method for generating a human hairstyle based on multi-feature retrieval and deformation.
  • The technical solution adopted by the present invention includes the following steps:
  • The weighted mixture of several distances yields a dissimilarity score for each hairstyle in the hairstyle database; the scores are sorted and the smallest one is taken out, which identifies the required hair model.
  • In step three, the feature points of the input face are matched and aligned with the feature points of the standard face, i.e., a two-dimensional similarity transform is solved:

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = s\,R(\theta)\begin{bmatrix} x \\ y \end{bmatrix} + t, \qquad R(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$

    where s is the scaling factor, θ is the rotation angle, t is the translation displacement, and R is an orthogonal matrix.
  • In step four, the hairstyle masks are first retrieved based on the Minkowski distance; then weight overlays are applied for salient features; next, the hairstyles are retrieved again based on the Hausdorff distance; finally, a retrieval based on the hair flow information is performed.
  • In step four, the mask of the hairstyle to be retrieved is defined as H, and the mask of one hairstyle in the hairstyle database is B_i.
  • The corresponding Minkowski distance is:

$$D(H, B_i) = \Big(\sum_k \big|H_k - B_{i,k}\big|^p\Big)^{1/p}$$

  • With this formula the current input hairstyle is compared against all hairstyles in the database; sorting from small to large yields the score ranking vector M of the corresponding hairstyles in the hairstyle database.
  • In step 4, very salient features of the hairstyle are given correspondingly higher weights; for all retrieved candidate hairstyles, the weight is increased by 25% in the forehead region;
  • In step 4, the hairstyle mask to be retrieved is defined as H, and the standard hairstyle of the hairstyle library is B.
  • The corresponding Hausdorff distance is:

$$d_H(H, B) = \max\Big\{\, \sup_{h \in H}\inf_{b \in B} d(h, b),\ \sup_{b \in B}\inf_{h \in H} d(h, b) \,\Big\}$$
  • In step four, the flow field of the hair is obtained with a gradient-based method.
  • The horizontal gradient of the hair is obtained as:

    d_x(i,j) = [I(i+1,j) - I(i-1,j)]/2
  • In step five, hairstyle recognition and matching is based on basic hair-curl blocks: labeled hair data of different types are used with a deep learning network to train a model, yielding the hair network;
  • In step 6, the retrieval based on multi-feature fusion: M_2, H, and L are assigned weights a : b : c, and the three vectors are fused into a comprehensive ranking vector F:

    F = aM_2 + bH + cL

  • F is sorted from small to large, and the top N entries are selected as candidate hairstyles;
  • among these, the curliness/straightness of the hair is ranked, and the top-ranked candidate is selected as the final retrieval result.
  • The present invention uses a single frontal face photo to retrieve, from a large three-dimensional hairstyle database, the three-dimensional hair model most similar to the photo, avoiding manual modeling and improving efficiency;
  • the retrieved model is deformed to a certain degree so that the generated 3D hairstyle is as similar as possible to the input picture, ensuring a high degree of fidelity.
  • FIG. 1 is an operation diagram of mask acquisition;
  • FIG. 2 is a schematic diagram of feature points annotated on a human face;
  • FIG. 3 is a flowchart of obtaining facial key points;
  • FIG. 5 is an operation flowchart of hairstyle recognition and matching of basic hair-curl blocks;
  • in the figure, C: convolution layer (Conv layer), P: pooling layer, D: 2D dense block, T: 2D transition layer;
  • FIGS. 6a-d are schematic diagrams of hairstyle recognition and matching of basic hair-curl blocks.
  • This embodiment proposes a method for generating human hairstyles based on multi-feature retrieval and deformation, aimed at high-precision three-dimensional portrait generation.
  • According to the input photo, the large three-dimensional hairstyle database is searched for the three-dimensional hair model most similar to the photo, and the retrieved model is deformed to a certain extent so that the generated three-dimensional hairstyle is as similar as possible to the input picture.
  • First comes the data pre-processing stage: the frontal photos of all hairstyles in the 3D hairstyle library and the corresponding hair masks are rendered, after which retrieval results are determined by comparing the 2D images.
  • Step one: segment the hairstyle.
  • To find the specific position of the hair in a single frontal face image, the image first needs to be segmented to obtain a shape mask for further comparison with the masks in the hairstyle database.
  • FIG. 1 illustrates the mask acquisition operation; segmentation of the hairstyle can be done either manually or automatically.
  • Step 2: facial feature point recognition and alignment.
  • So that the positions of the hair are substantially the same, the input portraits are first aligned by the feature points of the face before retrieval.
  • The algorithm uses cascaded regressors.
  • A series of calibrated face pictures is needed as the training set.
  • This embodiment uses an open-source dataset containing about 2,000 images with annotated landmarks, and uses it to train a DCNN-based elementary facial feature point predictor.
  • The network uses a convolutional neural network structure as the basis for training.
  • Given an image, an initial shape is generated, i.e., an approximate feature point position is estimated; a gradient boosting algorithm is then used to reduce the sum of squared errors between the initial shape and the verified ground truth. Least squares is used to minimize the error, yielding the cascaded regressor of each stage.
  • S^(t) denotes the estimate of the current state S.
  • Each regressor r_t(·,·) is computed in each cascade stage from the current state S^(t) and the input photo I.
  • The formula is as follows:

    S^(t+1) = S^(t) + r_t(I, S^(t))

  • The most critical part of the cascade is that its prediction is feature-based (for example, computed from the grayscale values of the pixels of I) and related to the current state.
  • Each r_t is trained with a regression tree learned by gradient boosting, and the error is minimized with least squares;
  • t denotes the cascade index,
  • and r_t(·,·) denotes the regressor of the current stage.
  • The input parameters of the regressor are the image I and the feature points updated by the previous-stage regressor.
  • The features used can be grayscale values or others.
  • Each regressor consists of many trees, and the parameters of each tree are trained from the coordinate differences between the current shape and the verified ground truth and from randomly selected pixel pairs. Referring to FIG. 3, while learning the trees, ERT stores the shape update directly in a leaf node. After the initial position S passes through all learned trees, the mean shape is added to the contributions of all traversed leaf nodes to obtain the final facial key point positions.
  • Step 3: align the image with the standard human face to get the corresponding hair region, i.e., solve a two-dimensional similarity transform in which s is the scaling factor, θ is the rotation angle, and t is the translation displacement.
  • Least squares is then used to solve for the rotation, translation, and scaling matrices so that the first vector aligns as closely as possible to the points of the second vector.
  • The two shape matrices are p and q, respectively. Each row of a matrix holds the x and y coordinates of one feature point; with 68 feature point coordinates, p ∈ R^{68×2}.
  • The objective function of the least squares problem is:

$$\arg\min_{s,R,T}\ \sum_{i=1}^{68} \left\| sRp_i^T + T - q_i^T \right\|^2$$

  • Solving it gives the warping function for the face alignment, i.e., the corresponding s, R, T.
  • Applying the obtained warping function to the cut-out hairstyle mask yields the aligned hairstyle.
  • Step 4: compute the Minkowski distance between the hair region and the hair masks of all frontal human faces in the hairstyle database.
  • The Minkowski distance compares the shape similarity of two hairstyle masks; it essentially measures the area of the non-overlapping regions, and the more non-overlapping area there is, the larger the Minkowski distance.
  • The mask of the hairstyle to be retrieved is defined as H, and the mask of a hairstyle in the hairstyle database is B_i.
  • The corresponding Minkowski distance is:

$$D(H, B_i) = \Big(\sum_k \big|H_k - B_{i,k}\big|^p\Big)^{1/p}$$

  • With this formula the current input hairstyle is compared against all hairstyles in the database; sorting from small to large yields the score ranking vector M of the corresponding hairstyles.
  • Very salient features such as the bangs are given correspondingly higher weights so that the retrieved hairstyles are as similar as possible in the bangs region.
  • The weight is increased by 25% in the forehead region: let the bangs region of the standard head be L; after face alignment, the L region of the input photo is compared with the L region of the standard head; wherever they disagree, the weight is multiplied by 1.25, making the two masks score as less similar.
  • Some hairstyles may have very thin, long braids on both sides; such a feature may have little effect on the mask overlap area, but it is very important to human perception.
  • The Hausdorff distance is applied here so that such hair details can be preserved accordingly. The Hausdorff distance actually measures the gap between the two hairstyles where they differ the most.
  • The hairstyle mask to be retrieved is defined as H,
  • and the standard hairstyle of the hairstyle library is B.
  • The corresponding Hausdorff distance is:

$$d_H(H, B) = \max\Big\{\, \sup_{h \in H}\inf_{b \in B} d(h, b),\ \sup_{b \in B}\inf_{h \in H} d(h, b) \,\Big\}$$

  • The current input hairstyle is compared against all hairstyles in the database with this Hausdorff distance, and sorting from small to large finally yields the corresponding ranking vector H.
  • A gradient-based method is first used to obtain the flow field of the hair; normally the flow direction of hair is perpendicular to its gradient field, so for the input hairstyle photo I, the horizontal gradient of the hair is found first:

    d_x(i,j) = [I(i+1,j) - I(i-1,j)]/2
  • Step five: hairstyle recognition and matching based on basic hair-curl blocks: a large amount of labeled hair data of different types (straight, curly, braided, and small-curl hair) is used with a deep learning network to train a model, yielding HairNet.
  • The input hair is first sampled through a Gaussian pyramid into input images of different scales, together with the standard images in the hairstyle library.
  • Superpixel segmentation is performed to obtain hair blocks of different sizes; these blocks are then uniformly rescaled into patches of the same size, which are fed into the hair network.
  • The type to which each patch most probably belongs is finally obtained.
  • The input hair is then matched against the candidate hairs in the hairstyle library.
  • Specifically, the hair is divided into blocks, and one-to-one multi-scale difference computations are performed to obtain the deviation values of the different candidate hairs.
  • Step 6: the retrieval based on multi-feature fusion includes:
  • obtaining the ranking M_2 first; the ranking H is then computed from the Hausdorff distance, and the flow-direction ranking L is combined with them. Weights a : b : c are assigned respectively, and the three vectors are fused into a comprehensive ranking vector F.
  • F is sorted from small to large, and the top N entries are selected as candidate hairstyles.
  • The previously trained HairNet ranks the curliness/straightness of hair among these candidates, and the top-ranked one is selected as the final search candidate R.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Mathematical Physics (AREA)
  • Library & Information Science (AREA)
  • Geometry (AREA)
  • Computational Linguistics (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A method for generating a human hairstyle based on multi-feature retrieval and deformation, comprising: obtaining a hairstyle mask; recognizing the feature points of a face and matching them against those in a hairstyle database; aligning the image with a standard face to obtain the corresponding hair region; computing the Minkowski distance between said hair region and the hair masks of all frontal faces in the hairstyle database, and assigning corresponding weights after sorting from small to large; training a deep learning network to detect the hairstyle of basic hair blocks at different scales; and taking out the most similar hair picture. The invention uses a single frontal face photo to retrieve, from a large three-dimensional hairstyle database, the three-dimensional hair model most similar to the photo, avoiding manual modeling, improving efficiency, and ensuring a high degree of fidelity.

Description

Method for generating a human hairstyle based on multi-feature retrieval and deformation
Technical Field
The present invention relates to the field of three-dimensional images, and in particular to a method for generating a human hairstyle based on multi-feature retrieval and deformation.
Background Art
The generation of hair models belongs to three-dimensional head reconstruction technology; it is an important part of a virtual character's image and one of the most important features of a virtual person. An avatar's head is usually divided into two parts: the face and the hair.
For hair reconstruction, the currently prevalent approach uses one frontal photo and one profile photo as the information source: the facial and hair feature points of the front and side views are extracted, a three-dimensional head model is generated, a two-dimensional head texture is generated from the hair feature points and mapped onto the three-dimensional head model, and, according to the feature points of the hair region, a Coons surface is fitted to the hair region, deformed, and texture-mapped.
Modeling from a single photo, by contrast, generally extracts useful prior knowledge from a three-dimensional face database and then infers the three-dimensional model corresponding to the face in the photo. Related art includes:
Document WO2016/CN107121 discloses a method, device, and terminal for reconstructing a user's hair model, comprising: obtaining a frontal face image of the user to be reconstructed; determining its hair region image; matching the hair region against the three-dimensional (3D) hair models in a hairstyle database to obtain the 3D hair model closest to the hair region image; and determining the closest 3D hair model as the 3D hair model of the reconstructed user.
Chinese patent application CN201680025609.1 relates to a three-dimensional hair modeling method and device, comprising: determining a first coordinate transformation between the 3D head model for which hair is to be created and a preset reference head model, and a second coordinate transformation between the 3D head model and a preset 3D hair template; registering the 3D head model with the 3D hair template based on the first and second coordinate transformations, the 3D hair template matching the reference head model; and, when an erroneous region is detected in the registered 3D hair template, deforming the hair in that region with a radial basis function (RBF) to correct it. An erroneous region is one where the 3D hair template does not completely occlude the scalp region of the 3D head model, or where the hair-root region of the 3D hair template occludes a non-scalp region of the 3D head model.
Chinese patent CN201310312500.4 relates to a method for automatically generating a three-dimensional avatar, comprising: building a three-dimensional face database; collecting a three-dimensional hairstyle database; detecting the face in an input frontal photo with a face detection algorithm and locating the frontal facial feature points with an active shape model; generating a three-dimensional face model with a deformable-model method from the 3D face database, the input photo, and the facial feature point coordinates; segmenting the hair in the input photo with a Markov-random-field-based method; extracting the hair texture from the segmentation result; obtaining the best-matching hair model; and compositing the face model with the hair model. This avoids adding hairstyles manually, improves efficiency, and ensures high fidelity.
CN201410161576.6 discloses a device for generating virtual human hair, comprising: an obtaining unit that obtains a frontal photo of a face; a first determining unit that determines a three-dimensional head model from the obtained photo and determines the adaptation value of a hair template; a second determining unit that, from the preset correspondence between standard adaptation values of hair templates and the description information of standard hair templates, determines the description information of the standard hair template corresponding to the adaptation value determined by the first determining unit; and a generating unit that, from that description information and the three-dimensional head model determined by the first determining unit, obtains an exclusive hair template adapted to the three-dimensional head model. Reconstructing the hair of the person in a photo then requires only a single frontal photo and no collection of hair feature points.
Summary of the Invention
To solve the existing problems, the present invention aims to provide a method for generating a human hairstyle based on multi-feature retrieval and deformation.
To achieve the above object, the technical solution adopted by the present invention comprises the following steps:
1) Automatically segment the hairstyle in a single picture to obtain its corresponding hairstyle mask.
2) Use a facial feature point recognition algorithm to identify the feature points of the face in the input picture, match them against the feature points of the standard face of the hairstyle database, and solve for the corresponding warping function.
3) Use the obtained warping function to align the face of the input picture with the standard face, thereby obtaining the correspondingly aligned hair region.
4) To obtain the similarity of the shape regions, compute the Minkowski distance between the aligned mask of the hair to be retrieved and the frontal masks of all hairstyles in the hairstyle library, sort the resulting distances from small to large, and assign corresponding weights.
5) To preserve the detail features of the hair, compute the detail similarity of the hair with the Hausdorff distance and repeat the weighting of step 4). Rank the matching hairstyles by combining the weights of step 4), and take out the ten most similar hairstyles.
6) Compute the flow fields of the ten most similar hairs and match them against the hairstyle to be detected, obtaining the five most similar hairstyles.
7) Train a deep learning network to detect the hairstyle of basic hair blocks at different scales, here divided into four basic types: straight, curly, small-curl, and braided hair. Then perform multi-scale histogram matching between the hair picture to be detected and the five candidate hairs to obtain different matching scores.
The weighted mixture of these several distances yields a dissimilarity score for each hairstyle in the hairstyle database; the scores are sorted and the smallest one is taken out, which identifies the required hair model.
In step three, the feature points of the input face are matched and aligned with the feature points of the standard face,
i.e., a two-dimensional similarity transform is solved:

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = s\,R(\theta)\begin{bmatrix} x \\ y \end{bmatrix} + t, \qquad R(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$

where s is the scaling factor, θ is the rotation angle, t is the translation displacement, and R is an orthogonal matrix.
In step four, the hairstyle masks are first retrieved based on the Minkowski distance; then weight overlays are applied for salient features; next, the hairstyles are retrieved again based on the Hausdorff distance; finally, a retrieval based on the hair flow information is performed.
In step four, the mask of the hairstyle to be retrieved is defined as H, and the mask of one hairstyle in the hairstyle database as B_i. The corresponding Minkowski distance is:

$$D(H, B_i) = \Big(\sum_k \big|H_k - B_{i,k}\big|^p\Big)^{1/p}$$

where k is the index into the mask after it has been flattened into a one-dimensional vector, and p is the parameter of the Minkowski distance, here taken as p = 2. With this formula the current input hairstyle is compared against all hairstyles in the database; sorting from small to large finally yields the score ranking vector M of the corresponding hairstyles in the hairstyle library.
In step four, very salient features of the hairstyle are given correspondingly higher weights; for all retrieved candidate hairstyles, the weight is increased by 25% in the forehead region.
Let the bangs region of the standard head be L. After face alignment, the L region of the input photo is compared with the L region of the standard head; wherever they disagree, the weight is multiplied by 1.25. These saliency terms are added to the previous Minkowski distances and sorted, yielding the improved Minkowski distance vector M_2.
In step four, the hairstyle mask to be retrieved is defined as H, and the standard hairstyle of the hairstyle library as B. The corresponding Hausdorff distance is:

$$d_H(H, B) = \max\Big\{\, \sup_{h \in H}\inf_{b \in B} d(h, b),\ \sup_{b \in B}\inf_{h \in H} d(h, b) \,\Big\}$$

where sup denotes the supremum and inf the infimum.
With this formula the current input hairstyle is compared against all hairstyles in the database, and sorting from small to large finally yields the corresponding ranking vector H.
In step four, a gradient-based method is used to obtain the flow field of the hair. For an input hairstyle photo I, the horizontal gradient of the hair is found first:

d_x(i,j) = [I(i+1,j) - I(i-1,j)]/2

then the vertical gradient of the hair:

d_y(i,j) = [I(i,j+1) - I(i,j-1)]/2

The flow field C of the hair then satisfies:

[C_x, C_y] · [d_x, d_y]^T = 0

from which the flow field C can be solved. The similarity of C is added to the ranking as a criterion, yielding the ranking vector L.
In step five, hairstyle recognition and matching is based on basic hair-curl blocks: labeled hair data of different types are used with a deep learning network to train a model, yielding the hair network;
the input hair is sampled through a Gaussian pyramid into input images of different scales, together with the standard images in the hairstyle library;
superpixel segmentation is applied to the hair region, and the resulting hair blocks are uniformly rescaled into patches of the same size, which are fed into the hair network.
In step six, retrieval based on multi-feature fusion: M_2, H, and L are assigned weights a : b : c respectively, and the three vectors are fused into a comprehensive ranking vector F:

F = aM_2 + bH + cL

F is sorted from small to large, and the top N entries are selected as candidate hairstyles;
among these N candidate hairstyles, the similarity of hair curliness/straightness is ranked, and the top-ranked one is selected as the final retrieval candidate.
Compared with the prior art, the present invention uses a single frontal face photo to retrieve, from a large three-dimensional hairstyle database, the three-dimensional hair model most similar to the photo, avoiding manual modeling and improving efficiency; moreover, the retrieved model is deformed to a certain degree so that the generated three-dimensional hairstyle is as similar as possible to the input picture, ensuring a high degree of fidelity.
Brief Description of the Drawings
FIG. 1 is an operation diagram of mask acquisition;
FIG. 2 is a schematic diagram of feature points annotated on a face;
FIG. 3 is a flowchart of obtaining facial key points;
FIG. 4 is a reference diagram of different types of hair data;
FIG. 5 is an operation flowchart of hairstyle recognition and matching of basic hair-curl blocks;
in the figure, C: convolution layer (Conv layer), P: pooling layer, D: 2D dense block, T: 2D transition layer;
FIGS. 6a-d are schematic diagrams of hairstyle recognition and matching of basic hair-curl blocks.
Detailed Description of the Embodiments
The present invention is further described below with reference to the drawings.
Aimed at high-precision three-dimensional portrait generation, this embodiment proposes a method for generating a human hairstyle based on multi-feature retrieval and deformation: a single frontal photo of a person is input; according to the input photo, the three-dimensional hair model most similar to said photo is retrieved from a large three-dimensional hairstyle database; and the retrieved model is deformed to a certain degree so that the generated three-dimensional hairstyle is as similar as possible to the input picture, thereby obtaining the three-dimensional hairstyle of the input portrait.
First comes the data pre-processing stage: the frontal photos of all hairstyles in the 3D hairstyle library and the corresponding hair masks must be rendered; retrieval results are then determined by comparing the two-dimensional images.
Step 1: segment the hairstyle. To find the specific position of the hair in a single frontal face image, the image first needs to be segmented to obtain a shape mask for further comparison with the masks in the hairstyle database. Referring to FIG. 1, which illustrates the mask acquisition operation, segmentation of the hairstyle can be done either manually or automatically, as follows:
Manually: matting software such as Photoshop or After Effects can be used to select the hair region by hand, yielding the hair mask.
Automatically: a large dataset can be assembled by hand, consisting of single frontal photos containing hair together with the corresponding hairstyle masks; a deep neural network trained on it then segments the hair automatically.
Step 2: facial feature point recognition and alignment.
So that the position of the hair is roughly the same, the input portrait must first be aligned by the feature points of the face before retrieval.
First, referring to FIG. 2, this embodiment annotates 68 feature points on a standard face.
Then the facial feature points are detected with the ERT (ensemble of regression trees) cascaded regression algorithm. The algorithm uses cascaded regressors and first needs a series of calibrated face pictures as the training set. This embodiment uses an open-source dataset containing roughly two thousand training images with annotated landmarks, and uses it to train a DCNN-based elementary facial feature point predictor. The network uses a convolutional neural network structure as the basis for training. Given a picture, an initial shape is generated, i.e., an approximate feature point position is estimated first; a gradient boosting algorithm is then used to reduce the sum of squared errors between the initial shape and the verified ground truth. Least squares is used to minimize the error, yielding the cascaded regressor of each stage. The core formula is as follows: S^(t) denotes the estimate of the current state S, and each regressor r_t(·,·) is computed in each cascade stage from the current state S^(t) and the input photo I:

S^(t+1) = S^(t) + r_t(I, S^(t))

The most critical part of the cascade is that its prediction is feature-based (for example, computed from the grayscale values of the pixels of I) and related to the current state.
This embodiment trains each r_t with regression trees learned by gradient boosting, minimizing the error with least squares. t denotes the cascade index, and r_t(·,·) denotes the regressor of the current stage. The input parameters of the regressor are the image I and the feature points updated by the previous-stage regressor; the features used can be grayscale values or others. Each regressor consists of many trees, and the parameters of each tree are trained from the coordinate differences between the current shape and the verified ground truth and from randomly selected pixel pairs. Referring to FIG. 3, while learning the trees, ERT stores the shape update directly in the leaf nodes. After the initial position S passes through all learned trees, the mean shape plus the contributions of all traversed leaf nodes gives the final facial key point positions.
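As a concrete illustration of this detection step, the sketch below runs an off-the-shelf ERT landmark predictor using dlib, which implements the same cascaded-regression-tree approach. The pre-trained model file is the one dlib publicly distributes, not the predictor trained in this embodiment, so treat it as an assumption for demonstration.

```python
# A minimal sketch of 68-point landmark detection with dlib's ERT predictor.
# The .dat model file name is dlib's publicly distributed one (an assumption
# here); the embodiment trains its own predictor on an open landmark dataset.
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def detect_landmarks(image):
    """Return a (68, 2) array of (x, y) landmarks for the first detected face."""
    faces = detector(image, 1)              # upsample once to catch small faces
    if not faces:
        return None
    shape = predictor(image, faces[0])      # run the learned cascade of trees
    return np.array([(p.x, p.y) for p in shape.parts()], dtype=np.float64)
```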
Step 3: align the image with the standard face to obtain the corresponding hair region.
For face alignment, the 68 detected feature points of the input face must be matched and aligned with the 68 feature points of the standard face, i.e., a two-dimensional similarity transform is solved:

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = s\,R(\theta)\begin{bmatrix} x \\ y \end{bmatrix} + t, \qquad R(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$

where s is the scaling factor, θ is the rotation angle, t is the translation displacement, and R is an orthogonal matrix:

R^T R = I
Least squares is now used to solve for the rotation, translation, and scaling matrices so that the first vector aligns as closely as possible to the points of the second vector. The two shape matrices are p and q, respectively; each row of a matrix holds the x, y coordinates of one feature point, so with 68 feature point coordinates, p ∈ R^{68×2}. The objective function of the least squares problem is:

$$\arg\min_{s,R,T}\ \sum_{i=1}^{68} \left\| sRp_i^T + T - q_i^T \right\|^2$$

where p_i is the i-th row of the matrix p. Written in matrix form:

$$\arg\min_{s,R,T}\ \left\| sRp^T + T - q^T \right\|_F \qquad \text{s.t. } R^T R = I$$

where ‖·‖_F denotes the Frobenius norm.
This equation has an analytical solution. First, the effect of translation can be eliminated by subtracting from every point the mean of all 68 points:

$$\bar{p} = \frac{1}{68}\sum_{i=1}^{68} p_i, \qquad \bar{q} = \frac{1}{68}\sum_{i=1}^{68} q_i$$

Each point then has the corresponding data mean subtracted:

$$\tilde{p}_i = p_i - \bar{p}, \qquad \tilde{q}_i = q_i - \bar{q}$$

Further, the effect of scale can be removed by dividing the processed points by the root-mean-square distance:

$$s = \sqrt{\frac{1}{68}\sum_{i=1}^{68} \left\| \tilde{p}_i \right\|^2}$$

(and likewise for q).
After the above processing, the analytical solution of the problem can be obtained:

M = BA^T
svd(M) = UΣV^T
R = UV^T

This yields R. From the above solution, the warping function for the face alignment, i.e., the corresponding s, R, T, can be obtained. Applying the obtained warping function to the cut-out hairstyle mask yields the aligned hairstyle.
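The closed-form solve above can be written compactly in NumPy. In this sketch, A and B are read as the centered, scale-normalized point sets (our interpretation of the notation above), and the reflection check on the SVD solution is omitted for brevity:

```python
# A sketch of the closed-form similarity (Procrustes) alignment:
# center both 68x2 point sets, normalize by the RMS distance, and
# recover the rotation from the SVD of M = B A^T as in the text.
import numpy as np

def align_similarity(p, q):
    """Solve argmin_{s,R,T} sum_i ||s R p_i + T - q_i||^2 for (68, 2) arrays."""
    p_mean, q_mean = p.mean(axis=0), q.mean(axis=0)   # remove translation
    p0, q0 = p - p_mean, q - q_mean
    sp = np.sqrt((p0 ** 2).sum() / len(p))            # RMS distance of each set
    sq = np.sqrt((q0 ** 2).sum() / len(q))
    A, B = (p0 / sp).T, (q0 / sq).T                   # 2x68 normalized shapes
    U, _, Vt = np.linalg.svd(B @ A.T)                 # M = B A^T, svd(M) = U S V^T
    R = U @ Vt                                        # R = U V^T (reflection check omitted)
    s = sq / sp                                       # scale ratio
    T = q_mean - s * (R @ p_mean)                     # translation
    return s, R, T
```

Applying the warp to the hair mask then means transforming each foreground pixel coordinate x to s·R·x + T.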
Step 4: compute the Minkowski distance between the hair region and the hair masks of all frontal faces in the hairstyle database.
First, hairstyle mask retrieval based on the Minkowski distance:
The two corresponding aligned hairstyle masks are taken; the Minkowski distance can be used to compare how similar the shapes of the two hairstyle masks are. It essentially measures the area of the non-overlapping regions: the more non-overlapping area there is, the larger the Minkowski distance. Define the mask of the hairstyle to be retrieved as H and the mask of one hairstyle in the hairstyle database as B_i. The corresponding Minkowski distance is:

$$D(H, B_i) = \Big(\sum_k \big|H_k - B_{i,k}\big|^p\Big)^{1/p}$$

where k is the index into the mask after it has been flattened into a one-dimensional vector, and p is the parameter of the Minkowski distance, here taken as p = 2. With this formula the current input hairstyle is compared against all hairstyles in the database; sorting from small to large finally yields the score ranking vector M of the corresponding hairstyles in the hairstyle library.
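A minimal sketch of this ranking step, assuming the database masks have already been rendered and aligned (the variable names are illustrative, not from the patent):

```python
# Minkowski-distance (p = 2) ranking between the aligned input mask H and
# every database mask B_i, with all masks flattened into 1-D vectors.
import numpy as np

def minkowski(H, B, p=2):
    """Minkowski distance between two binary masks of equal shape."""
    diff = np.abs(H.astype(float).ravel() - B.astype(float).ravel())
    return (diff ** p).sum() ** (1.0 / p)

def rank_by_minkowski(H, masks, p=2):
    """Return database indices sorted by ascending distance (the vector M)."""
    dists = np.array([minkowski(H, B, p) for B in masks])
    order = np.argsort(dists)
    return order, dists[order]
```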
Next, the weight overlay for salient features:
Very salient features such as the bangs are given correspondingly higher weights so that the retrieved hairstyles are as similar as possible in the bangs region. For all retrieved candidate hairstyles, the weight is increased by 25% in the forehead region. Suppose the bangs region of the standard head is L; after face alignment, the L region of the input photo is compared with the L region of the standard head. Wherever they disagree, the weight is multiplied by 1.25, making the two masks score as less similar. Adding these saliency terms to the previous Minkowski distances and sorting yields the improved Minkowski distance vector M_2.
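A sketch of this salient-feature weighting; the text does not specify whether the 1.25x factor is applied inside or outside the p-th power, so applying it to the per-pixel difference is an assumption, and `bangs_mask` (the region L of the standard head) is an assumed input:

```python
# Weighted Minkowski distance: mismatched pixels inside the standard head's
# bangs region L contribute 1.25x to the distance, per the description above.
import numpy as np

def weighted_minkowski(H, B, bangs_mask, p=2, boost=1.25):
    H, B = H.astype(float), B.astype(float)
    w = np.where((bangs_mask > 0) & (H != B), boost, 1.0)  # up-weight mismatches in L
    return ((w * np.abs(H - B)) ** p).sum() ** (1.0 / p)
```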
Next, hairstyle retrieval based on the Hausdorff distance:
When assessing how well hair matches, the details of the hairstyle are a very important indicator.
For example, some hairstyles may have very thin, long braids on both sides; this feature may have little effect on the mask overlap area, but it is very important to human perception.
The Hausdorff distance is applied here precisely so that such hair details can be preserved accordingly. The Hausdorff distance actually measures the gap between the two hairstyles where they differ the most. Again define the hairstyle mask to be retrieved as H and the standard hairstyle of the hairstyle library as B. The corresponding Hausdorff distance is:

$$d_H(H, B) = \max\Big\{\, \sup_{h \in H}\inf_{b \in B} d(h, b),\ \sup_{b \in B}\inf_{h \in H} d(h, b) \,\Big\}$$

where sup denotes the supremum and inf the infimum. Likewise, the current input hairstyle is compared against all hairstyles in the database with this formula, and sorting from small to large finally yields the corresponding ranking vector H.
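A sketch of this distance using SciPy's directed Hausdorff on the foreground pixel coordinates of the two masks:

```python
# Symmetric Hausdorff distance between two binary masks, computed on the
# (row, col) coordinates of their foreground pixels.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(H, B):
    """Symmetric Hausdorff distance between two binary masks."""
    h_pts = np.argwhere(H > 0)   # coordinates of mask pixels
    b_pts = np.argwhere(B > 0)
    return max(directed_hausdorff(h_pts, b_pts)[0],
               directed_hausdorff(b_pts, h_pts)[0])
```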
Finally, hairstyle retrieval based on the hair flow information.
So that the retrieved hairstyle is as similar as possible in flow direction and degree of curl, a gradient-based method is first used to obtain the flow field of the hair. Normally the flow direction of hair is perpendicular to its gradient field, so for an input hairstyle photo I, the horizontal gradient of the hair is found first:

d_x(i,j) = [I(i+1,j) - I(i-1,j)]/2

then the vertical gradient of the hair:

d_y(i,j) = [I(i,j+1) - I(i,j-1)]/2

The flow field C of the hair then satisfies:

[C_x, C_y] · [d_x, d_y]^T = 0

from which the flow field C can be solved.
Different hair has different flow fields. Once the flow information is obtained, the flow at each hair pixel can be compared with the flow at the corresponding point of a candidate hair, finally yielding the similarity information C. The similarity of C is also added to the ranking as a criterion, yielding the ranking vector L.
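A sketch of the flow-field computation: central differences give (d_x, d_y), and a vector perpendicular to the gradient, the 90-degree rotation (-d_y, d_x), satisfies the orthogonality constraint above:

```python
# Gradient-based hair flow field for a 2-D grayscale image I: the flow at
# each pixel is the unit vector perpendicular to the local image gradient,
# so that [C_x, C_y] . [d_x, d_y]^T = 0 holds by construction.
import numpy as np

def flow_field(I):
    I = I.astype(float)
    dx = (np.roll(I, -1, axis=0) - np.roll(I, 1, axis=0)) / 2.0  # I(i+1,j) - I(i-1,j)
    dy = (np.roll(I, -1, axis=1) - np.roll(I, 1, axis=1)) / 2.0  # I(i,j+1) - I(i,j-1)
    C = np.stack([-dy, dx], axis=-1)                  # rotate gradient by 90 degrees
    norm = np.linalg.norm(C, axis=-1, keepdims=True)
    return C / np.maximum(norm, 1e-8)                 # unit flow vectors per pixel
```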
Referring to FIG. 4, step 5: hairstyle recognition and matching based on basic hair-curl blocks: a large amount of labeled hair data of four different types (straight, curly, braided, and small-curl hair) is used with a deep learning network to train a model, yielding the hair network (HairNet).
Referring to FIG. 5 and FIGS. 6a-d, the input hair is first sampled through a Gaussian pyramid into input images of different scales, together with the standard images in the hairstyle library. Next, superpixel segmentation of the hair region produces hair blocks of different sizes; these blocks are then uniformly rescaled into patches of the same size and fed into the hair network, which finally outputs the type to which each patch most probably belongs.
After the hair types of the different basic blocks of the candidate hairs and the input hair are obtained, the input hair is matched against the candidate hairs in the hairstyle library. Specifically, the hair is divided into blocks, and one-to-one multi-scale difference computations are performed to obtain the deviation values of the different candidate hairs; a sketch of the patch preparation follows.
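The sketch below prepares such patches with a Gaussian pyramid and SLIC superpixels. SLIC stands in for the unspecified superpixel algorithm, `hair_net` (the trained classifier) is assumed, and the scikit-image calls assume a recent version (0.19+ for `channel_axis`):

```python
# Patch preparation for the hair network: pyramid scales, superpixels inside
# the hair mask, and fixed-size patches cut around each superpixel.
import numpy as np
from skimage.segmentation import slic
from skimage.transform import pyramid_gaussian, resize

def hair_patches(image, mask, patch_size=32, n_segments=200):
    patches = []
    for scaled in pyramid_gaussian(image, max_layer=2, channel_axis=-1):
        m = resize(mask.astype(float), scaled.shape[:2], order=0) > 0.5
        labels = slic(scaled, n_segments=n_segments, mask=m)  # superpixels in hair only
        for lab in np.unique(labels[labels > 0]):
            rs, cs = np.where(labels == lab)
            box = scaled[rs.min():rs.max() + 1, cs.min():cs.max() + 1]
            # uniform "pull-up" to a common patch size, keeping channels
            patches.append(resize(box, (patch_size, patch_size) + box.shape[2:]))
    return np.stack(patches)

# Each patch would then get a curl-type probability from the trained network,
# e.g. probs = hair_net.predict(hair_patches(img, mask))  (hypothetical call).
```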
Step 6: retrieval based on multi-feature fusion:
For the input frontal photo, after hairstyle segmentation and alignment, the fusion of the above Minkowski distance with the salient hair features gives the ranking M_2; the ranking H is then computed from the Hausdorff distance; and the hair flow gives the ranking L. Weights a : b : c are assigned respectively, and the three vectors are fused into the comprehensive ranking vector F:

F = aM_2 + bH + cL

F is sorted from small to large, and the top N entries are selected as candidate hairstyles. Among these N candidates, the previously trained HairNet ranks the similarity of hair curliness/straightness, and the top-ranked one is selected as the final retrieval candidate R.
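A sketch of the fusion and final selection; the weights a, b, c are left unspecified in the text, so the values below are placeholders, and `curl_score` stands for the HairNet-based curliness deviation per candidate:

```python
# Multi-feature fusion: blend the three per-hairstyle score vectors, keep the
# N smallest fused scores, and break the tie with the curl-type deviation.
import numpy as np

def fuse_and_select(M2, H, L, curl_score, a=0.5, b=0.3, c=0.2, N=5):
    F = a * np.asarray(M2) + b * np.asarray(H) + c * np.asarray(L)
    candidates = np.argsort(F)[:N]                     # top-N smallest fused scores
    curl = np.asarray(curl_score)
    best = candidates[np.argmin(curl[candidates])]     # final retrieval candidate R
    return best, candidates
```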
The embodiments of the present invention have been described above with reference to the drawings and examples. The examples do not limit the present invention; those skilled in the art can make adjustments as needed, and all variations or modifications within the scope of the appended claims fall within the scope of protection.

Claims (10)

  1. A method for generating a human hairstyle based on multi-feature retrieval and deformation, characterized by comprising the following steps:
    Step 1: obtain the hairstyle mask from a single frontal face image;
    Step 2: recognize the feature points of the face and match them against the faces in a hairstyle database;
    Step 3: align the image with the standard face to obtain the corresponding hair region;
    Step 4: compute the Minkowski distance between the hair region and the hair masks of all frontal faces in the hairstyle database, and assign corresponding weights after sorting from small to large;
    Step 5: compute the flow fields of the several most similar hairs and match them against the hairstyle to be detected;
    Step 6: train a deep learning network to detect the hairstyle of basic hair blocks at different scales; perform multi-scale histogram matching between the hair picture to be detected and several candidate hairs to obtain different matching scores;
    finally, take out the most similar hair picture.
  2. The method for generating a human hairstyle based on multi-feature retrieval and deformation according to claim 1, characterized in that: the detail similarity of the hair is computed with the Hausdorff distance and the weighting of step 4 is repeated; the matching hairstyles are ranked in combination with the weights of step 4, and the ten most similar hairstyles are taken out.
  3. The method for generating a human hairstyle based on multi-feature retrieval and deformation according to claim 1 or 2, characterized in that: in step 3, the feature points of the input face are matched and aligned with the feature points of the standard face, i.e., a two-dimensional similarity transform is solved:

    $$\begin{bmatrix} x' \\ y' \end{bmatrix} = s\,R(\theta)\begin{bmatrix} x \\ y \end{bmatrix} + t, \qquad R(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$

    where s is the scaling factor, θ is the rotation angle, t is the translation displacement, and R is an orthogonal matrix.
  4. The method for generating a human hairstyle based on multi-feature retrieval and deformation according to claim 2, characterized in that: in step 4, the hairstyle masks are first retrieved based on the Minkowski distance; then weight overlays are applied for salient features; next, the hairstyles are retrieved again based on the Hausdorff distance; finally, a retrieval based on the hair flow information is performed.
  5. The method for generating a human hairstyle based on multi-feature retrieval and deformation according to claim 4, characterized in that: in step 4, the mask of the hairstyle to be retrieved is defined as H, and the mask of one hairstyle in the hairstyle database is B_i; the corresponding Minkowski distance is:

    $$D(H, B_i) = \Big(\sum_k \big|H_k - B_{i,k}\big|^p\Big)^{1/p}$$

    where k is the index into the mask after it has been flattened into a one-dimensional vector, and p is the parameter of the Minkowski distance, here taken as p = 2; with this formula the current input hairstyle is compared against all hairstyles in the database, and sorting from small to large finally yields the score ranking vector M of the corresponding hairstyles in the hairstyle library.
  6. The method for generating a human hairstyle based on multi-feature retrieval and deformation according to claim 5, characterized in that: in step 4, very salient features of the hairstyle are given correspondingly higher weights; for all retrieved candidate hairstyles, the weight is increased by 25% in the forehead region;
    let the bangs region of the standard head be L; after face alignment, the L region of the input photo is compared with the L region of the standard head; wherever they disagree, the weight is multiplied by 1.25; these saliency terms are added to the previous Minkowski distances and sorted, yielding the improved Minkowski distance vector M_2.
  7. The method for generating a human hairstyle based on multi-feature retrieval and deformation according to claim 6, characterized in that: in step 4, the hairstyle mask to be retrieved is defined as H, and the standard hairstyle of the hairstyle library is B; the corresponding Hausdorff distance is:

    $$d_H(H, B) = \max\Big\{\, \sup_{h \in H}\inf_{b \in B} d(h, b),\ \sup_{b \in B}\inf_{h \in H} d(h, b) \,\Big\}$$

    where sup denotes the supremum and inf the infimum;
    with this formula the current input hairstyle is compared against all hairstyles in the database, and sorting from small to large finally yields the corresponding ranking vector H.
  8. The method for generating a human hairstyle based on multi-feature retrieval and deformation according to claim 7, characterized in that: in step 4, a gradient-based method is used to obtain the flow field of the hair; for an input hairstyle photo I, the horizontal gradient of the hair is found first:
    d_x(i,j) = [I(i+1,j) - I(i-1,j)]/2
    then the vertical gradient of the hair:
    d_y(i,j) = [I(i,j+1) - I(i,j-1)]/2
    the flow field C of the hair then satisfies:
    [C_x, C_y] · [d_x, d_y]^T = 0
    from which the flow field C can be solved; the similarity of C is added to the ranking as a criterion, yielding the ranking vector L.
  9. The method for generating a human hairstyle based on multi-feature retrieval and deformation according to claim 8, characterized in that: in step 5, hairstyle recognition and matching is based on basic hair-curl blocks: labeled hair data of different types are used with a deep learning network to train a model, yielding the hair network;
    the input hair is sampled through a Gaussian pyramid into input images of different scales, together with the standard images in the hairstyle library;
    superpixel segmentation is applied to the hair region, and the resulting hair blocks are uniformly rescaled into patches of the same size, which are fed into the hair network.
  10. The method for generating a human hairstyle based on multi-feature retrieval and deformation according to claim 9, characterized in that: in step 6, retrieval based on multi-feature fusion: M_2, H, and L are assigned weights a : b : c respectively, and the three vectors are fused into a comprehensive ranking vector F:
    F = aM_2 + bH + cL
    F is sorted from small to large, and the top N entries are selected as candidate hairstyles;
    among these N candidate hairstyles, the similarity of hair curliness/straightness is ranked, and the top-ranked one is selected as the final retrieval candidate.
PCT/CN2019/107263 2018-09-30 2019-09-23 Method for generating a human hairstyle based on multi-feature retrieval and deformation WO2020063527A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US16/962,227 US10891511B1 (en) 2018-09-30 2019-09-23 Human hairstyle generation method based on multi-feature retrieval and deformation
KR1020207016099A KR102154470B1 (ko) 2018-09-30 2019-09-23 다중 특징 검색 및 변형에 기반한 3차원 인체 헤어스타일 생성 방법
GB2009174.0A GB2581758B (en) 2018-09-30 2019-09-23 Human hair style generation method based on multi-feature search and deformation
JP2020533612A JP6891351B2 (ja) 2018-09-30 2019-09-23 多特徴検索と変形に基づく人体髪型の生成方法

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811165895.9 2018-09-30
CN201811165895.9A CN109408653B (zh) 2018-09-30 Method for generating a human hairstyle based on multi-feature retrieval and deformation

Publications (1)

Publication Number Publication Date
WO2020063527A1 true WO2020063527A1 (zh) 2020-04-02

Family

ID=65466814

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/107263 WO2020063527A1 (zh) 2018-09-30 2019-09-23 Method for generating a human hairstyle based on multi-feature retrieval and deformation

Country Status (6)

Country Link
US (1) US10891511B1 (zh)
JP (1) JP6891351B2 (zh)
KR (1) KR102154470B1 (zh)
CN (1) CN109408653B (zh)
GB (1) GB2581758B (zh)
WO (1) WO2020063527A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112819921A (zh) * 2020-11-30 2021-05-18 北京百度网讯科技有限公司 Method, apparatus, device, and storage medium for changing a person's hairstyle
CN113269822A (zh) * 2021-05-21 2021-08-17 山东大学 Method and system for reconstructing a character hairstyle portrait for 3D printing
WO2021256319A1 (ja) * 2020-06-15 2021-12-23 ソニーグループ株式会社 Information processing device, information processing method, and recording medium

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109408653B (zh) 2018-09-30 2022-01-28 叠境数字科技(上海)有限公司 Method for generating a human hairstyle based on multi-feature retrieval and deformation
CN110415339B (zh) * 2019-07-19 2021-07-13 清华大学 Method and device for computing matching relationships between input three-dimensional shapes
CN110738595B (zh) * 2019-09-30 2023-06-30 腾讯科技(深圳)有限公司 Picture processing method, apparatus, device, and computer storage medium
CN110874419B (zh) * 2019-11-19 2022-03-29 山东浪潮科学研究院有限公司 Fast face database retrieval technique
CN111192244B (zh) * 2019-12-25 2023-09-29 新绎健康科技有限公司 Method and system for determining tongue features based on key points
WO2022047463A1 (en) * 2020-08-22 2022-03-03 Snap Inc. Cross-domain neural networks for synthesizing image with fake hair combined with real image
CN113762305B (zh) * 2020-11-27 2024-04-16 北京沃东天骏信息技术有限公司 Method and device for determining hair loss type
CN112734633A (zh) * 2021-01-07 2021-04-30 京东方科技集团股份有限公司 Virtual hairstyle replacement method, electronic device, and storage medium
KR102507460B1 (ko) * 2021-04-05 2023-03-07 고려대학교 산학협력단 Method and device for automatically generating cartoon backgrounds
CN113112616A (zh) * 2021-04-06 2021-07-13 济南大学 Improved editable three-dimensional generation method
CN113129347B (zh) * 2021-04-26 2023-12-12 南京大学 Self-supervised single-view three-dimensional hair strand model reconstruction method and system
CN113538455B (zh) * 2021-06-15 2023-12-12 聚好看科技股份有限公司 Three-dimensional hairstyle matching method and electronic device
KR102542705B1 (ko) * 2021-08-06 2023-06-15 국민대학교산학협력단 Method and apparatus for classifying behavior based on face tracking
CN113744286A (zh) * 2021-09-14 2021-12-03 Oppo广东移动通信有限公司 Virtual hair generation method and apparatus, computer-readable medium, and electronic device
FR3127602A1 (fr) * 2021-09-27 2023-03-31 Idemia Identity & Security France Method for generating an augmented image and associated device
US12118779B1 (en) * 2021-09-30 2024-10-15 United Services Automobile Association (Usaa) System and method for assessing structural damage in occluded aerial images
CN114187633B (zh) * 2021-12-07 2023-06-16 北京百度网讯科技有限公司 Image processing method and apparatus, and training method and apparatus for an image generation model
CN115311403B (zh) * 2022-08-26 2023-08-08 北京百度网讯科技有限公司 Training method for a deep learning network, virtual avatar generation method, and apparatus
TWI818824B (zh) * 2022-12-07 2023-10-11 財團法人工業技術研究院 Device and method for computing the face swing direction of an occluded face image
CN116503924B (zh) * 2023-03-31 2024-01-26 广州翼拍联盟网络技术有限公司 Portrait hair edge processing method, apparatus, computer device, and storage medium
CN116509118B (zh) * 2023-04-26 2024-08-20 深圳市华南英才科技有限公司 Control method and system for an ultra-high-speed hair dryer
CN117389676B (zh) * 2023-12-13 2024-02-13 成都白泽智汇科技有限公司 Intelligent hairstyle adaptation display method based on a display interface

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0897680A2 (en) * 1997-08-12 1999-02-24 Shiseido Company, Ltd. Method for selecting suitable hairstyle and image-map for hairstyle
KR20050018921A (ko) * 2005-02-01 2005-02-28 황지현 Method and system for automatic hairstyle synthesis
CN103366400A (zh) * 2013-07-24 2013-10-23 深圳市华创振新科技发展有限公司 Method for automatically generating a three-dimensional avatar
CN106372652A (zh) * 2016-08-28 2017-02-01 乐视控股(北京)有限公司 Hairstyle recognition method and hairstyle recognition device
CN108280397A (zh) * 2017-12-25 2018-07-13 西安电子科技大学 Hair detection method for human-body images based on deep convolutional neural networks
CN109408653A (zh) * 2018-09-30 2019-03-01 叠境数字科技(上海)有限公司 Method for generating a human hairstyle based on multi-feature retrieval and deformation

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002083318A (ja) * 2000-09-07 2002-03-22 Sony Corp Image processing device and method, and recording medium
KR100665371B1 (ko) * 2005-05-24 2007-01-04 곽노윤 Method for generating multiple virtual hairstyles using semi-automatic field morphing
EP2427857B1 (en) * 2009-05-04 2016-09-14 Oblong Industries, Inc. Gesture-based control systems including the representation, manipulation, and exchange of data
US8638993B2 (en) * 2010-04-05 2014-01-28 Flashfoto, Inc. Segmenting human hairs and faces
CN102419868B (zh) * 2010-09-28 2016-08-03 三星电子株式会社 Device and method for 3D hair modeling based on a 3D hair template
CN103218838A (zh) * 2013-05-11 2013-07-24 苏州华漫信息服务有限公司 Automatic hair drawing method for face cartoonization
CN103955962B (zh) 2014-04-21 2018-03-09 华为软件技术有限公司 Device and method for generating virtual human hair
WO2016032410A1 (en) * 2014-08-29 2016-03-03 Sagiroglu Seref Intelligent system for photorealistic facial composite production from only fingerprint
US9652688B2 (en) * 2014-11-26 2017-05-16 Captricity, Inc. Analyzing content of digital images
US9928601B2 (en) * 2014-12-01 2018-03-27 Modiface Inc. Automatic segmentation of hair in images
WO2016097732A1 (en) * 2014-12-16 2016-06-23 Metail Limited Methods for generating a 3d virtual body model of a person combined with a 3d garment image, and related devices, systems and computer program products
US10796480B2 (en) * 2015-08-14 2020-10-06 Metail Limited Methods of generating personalized 3D head models or 3D body models
US9864901B2 (en) * 2015-09-15 2018-01-09 Google Llc Feature detection and masking in images based on color distributions
CN105354869B (zh) * 2015-10-23 2018-04-20 广东小天才科技有限公司 Method and system for mapping a user's real head features onto a virtual avatar
CN105844706B (zh) * 2016-04-19 2018-08-07 浙江大学 Fully automatic three-dimensional hair modeling method based on a single image
WO2017181332A1 (zh) * 2016-04-19 2017-10-26 浙江大学 Fully automatic three-dimensional hair modeling method based on a single image
WO2017185301A1 (zh) * 2016-04-28 2017-11-02 华为技术有限公司 Three-dimensional hair modeling method and device
CN108463823B (zh) 2016-11-24 2021-06-01 荣耀终端有限公司 Method, device, and terminal for reconstructing a user's hair model
CN107886516B (zh) * 2017-11-30 2020-05-15 厦门美图之家科技有限公司 Method and computing device for computing hair strand direction in a portrait
US11023710B2 (en) * 2019-02-20 2021-06-01 Huawei Technologies Co., Ltd. Semi-supervised hybrid clustering/classification system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0897680A2 (en) * 1997-08-12 1999-02-24 Shiseido Company, Ltd. Method for selecting suitable hairstyle and image-map for hairstyle
KR20050018921A (ko) * 2005-02-01 2005-02-28 황지현 Method and system for automatic hairstyle synthesis
CN103366400A (zh) * 2013-07-24 2013-10-23 深圳市华创振新科技发展有限公司 Method for automatically generating a three-dimensional avatar
CN106372652A (zh) * 2016-08-28 2017-02-01 乐视控股(北京)有限公司 Hairstyle recognition method and hairstyle recognition device
CN108280397A (zh) * 2017-12-25 2018-07-13 西安电子科技大学 Hair detection method for human-body images based on deep convolutional neural networks
CN109408653A (zh) * 2018-09-30 2019-03-01 叠境数字科技(上海)有限公司 Method for generating a human hairstyle based on multi-feature retrieval and deformation

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CHAI, MENGLEI: "Single Image 3D Hair Modeling Techniques and Applications", DISSERTATION - ZHEJIANG UNIVERSITY - DOCTOR OF PHILOSOPHY, no. 1, 31 January 2018 (2018-01-31) *
CHEN HONG ET AL.: "A Generative Sketch Model for Human Hair Analysis and Synthesis", DEPARTMENTS OF STATISTICS AND COMPUTER SCIENCE, UNIVERSITY OF CALIFORNIA, LOS ANGELES, 31 July 2006 (2006-07-31), pages 1 - 35, XP055699789 *
FENG MIN ET AL.: "A Classified Method of Human Hair for Hair Sketching", 31 December 2008 (2008-12-31), pages 109 - 114, XP031287004 *
WANG NAN ET AL.: "Hair Style Retrieval by Semantic Mapping on Informative Patches", 31 December 2011 (2011-12-31), pages 110 - 114, XP032130112, DOI: 10.1109/ACPR.2011.6166682 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021256319A1 (ja) * 2020-06-15 2021-12-23 ソニーグループ株式会社 Information processing device, information processing method, and recording medium
CN112819921A (zh) * 2020-11-30 2021-05-18 北京百度网讯科技有限公司 Method, apparatus, device, and storage medium for changing a person's hairstyle
CN112819921B (zh) * 2020-11-30 2023-09-26 北京百度网讯科技有限公司 Method, apparatus, device, and storage medium for changing a person's hairstyle
CN113269822A (zh) * 2021-05-21 2021-08-17 山东大学 Method and system for reconstructing a character hairstyle portrait for 3D printing
CN113269822B (zh) * 2021-05-21 2022-04-01 山东大学 Method and system for reconstructing a character hairstyle portrait for 3D printing

Also Published As

Publication number Publication date
CN109408653A (zh) 2019-03-01
US20200401842A1 (en) 2020-12-24
GB2581758B (en) 2021-04-14
GB2581758A (en) 2020-08-26
JP6891351B2 (ja) 2021-06-18
GB202009174D0 (en) 2020-07-29
US10891511B1 (en) 2021-01-12
KR102154470B1 (ko) 2020-09-09
CN109408653B (zh) 2022-01-28
JP2021507394A (ja) 2021-02-22
KR20200070409A (ko) 2020-06-17

Similar Documents

Publication Publication Date Title
WO2020063527A1 (zh) Method for generating a human hairstyle based on multi-feature retrieval and deformation
Cheng et al. Exploiting effective facial patches for robust gender recognition
CN105844706B (zh) Fully automatic three-dimensional hair modeling method based on a single image
CN110263659B (zh) Finger vein recognition method and system based on triplet loss and a lightweight network
US11403874B2 (en) Virtual avatar generation method and apparatus for generating virtual avatar including user selected face property, and storage medium
JP7512262B2 (ja) Face key point detection method, apparatus, computer device, and computer program
WO2018107979A1 (zh) Multi-pose facial feature point detection method based on cascaded regression
CN101320484B (zh) Method for generating a virtual face image and a three-dimensional face recognition method
WO2017181332A1 (zh) Fully automatic three-dimensional hair modeling method based on a single image
Zhu et al. Discriminative 3D morphable model fitting
CN101561874B (zh) Method for generating a virtual face image
Lemaire et al. Fully automatic 3D facial expression recognition using differential mean curvature maps and histograms of oriented gradients
Yang et al. CNN based 3D facial expression recognition using masking and landmark features
JP2012160178A (ja) Object recognition device, method for performing object recognition, and method for implementing a dynamic appearance model
CN103971112B (zh) Image feature extraction method and device
CN113570684A (zh) Image processing method, apparatus, computer device, and storage medium
WO2022257456A1 (zh) Hair information recognition method, apparatus, device, and storage medium
Angadi et al. Face recognition through symbolic modeling of face graphs and texture
CN110544310A (zh) Feature analysis method for three-dimensional point clouds under hyperbolic conformal mapping
CN106919884A (zh) Facial expression recognition method and device
CN104732247B (zh) Facial feature localization method
Zhong et al. Exploring features and attributes in deep face recognition using visualization techniques
Alsawwaf et al. In your face: person identification through ratios and distances between facial features
CN109886091B (zh) Three-dimensional facial expression recognition method based on weighted local curl patterns
CN115908260B (zh) Model training method, face image quality evaluation method, device, and medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19865679

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20207016099

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 202009174

Country of ref document: GB

Kind code of ref document: A

Free format text: PCT FILING DATE = 20190923

ENP Entry into the national phase

Ref document number: 2020533612

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19865679

Country of ref document: EP

Kind code of ref document: A1