WO2020063527A1 - Human hairstyle generation method based on multi-feature retrieval and deformation - Google Patents
Human hairstyle generation method based on multi-feature retrieval and deformation
- Publication number
- WO2020063527A1 (PCT/CN2019/107263)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- hair
- hairstyle
- hairstyles
- distance
- feature
- Prior art date
Classifications
- G06F16/53—Information retrieval of still image data; querying
- G06F16/24578—Query processing with adaptation to user needs, using ranking
- G06F16/532—Query formulation, e.g. graphical querying
- G06F16/56—Retrieval of still image data having vectorial format
- G06F16/5854—Retrieval using metadata automatically derived from the content, using shape and object relationship
- G06F18/2163—Pattern recognition; partitioning the feature space
- G06F18/22—Pattern recognition; matching criteria, e.g. proximity measures
- G06N20/00—Machine learning
- G06T3/02—Affine transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof
- G06T7/337—Image registration using feature-based methods involving reference images or patches
- G06T11/60—2D image generation; editing figures and text; combining figures or text
- G06V10/758—Image or video pattern matching involving statistics of pixels or of feature values, e.g. histogram matching
- G06V10/764—Image or video recognition using classification, e.g. of video objects
- G06V10/82—Image or video recognition using neural networks
- G06V40/165—Human faces: detection, localisation, normalisation using facial parts and geometric relationships
- G06V40/168—Human faces: feature extraction; face representation
- G06T2207/20081—Indexing scheme: training; learning
- G06T2207/30201—Indexing scheme: subject of image is a face
Definitions
- The invention relates to the field of three-dimensional imaging, and in particular to a method for generating a human hairstyle based on multi-feature retrieval and deformation.
- Hair model generation belongs to three-dimensional head reconstruction. The hairstyle is an important part of a virtual character's image and one of the most distinctive features of a virtual person. The head is generally divided into two parts: the face and the hair.
- A widely used approach takes a frontal photo and a side photo as the information source, extracts the frontal and lateral feature points of the face and hair, generates a three-dimensional head model, and maps a two-dimensional head texture generated from the hair feature points onto the three-dimensional head model. According to the feature points of the hair region, a Coons surface is fitted to the hair region, deformed, and texture-mapped.
- Modeling from a single photo generally extracts useful prior knowledge from a three-dimensional face database and then infers the three-dimensional model corresponding to the face in the photo. Related prior art includes:
- WO2016/CN107121 discloses a method, device and terminal for reconstructing a user's hair model, including: obtaining an image of the face of the user to be reconstructed; determining the image of its hair region; matching the hair region against the three-dimensional (3D) hair models in a database to obtain the 3D hair model closest to the hair region image; and taking that closest model as the reconstructed user's 3D hair model.
- Chinese patent application CN201680025609.1 relates to a three-dimensional hair modeling method and device, including: determining a first coordinate transformation relationship between the 3D head model for which hair is to be created and a preset reference head model, and a second coordinate transformation relationship between the 3D head model and a preset 3D hair template; registering the 3D head model with the 3D hair template based on these two relationships, the 3D hair template matching the reference head model; and, when an error region is detected in the registered 3D hair template, deforming the hair in that region with a radial basis function (RBF) to correct it. An error region is one where the 3D hair template fails to completely occlude the scalp region of the 3D head model, or where the root region of the 3D hair template occludes a non-scalp region of the 3D head model.
- Chinese patent CN201310312500.4 relates to a method for automatically generating a three-dimensional avatar, including: collecting a three-dimensional face database and a three-dimensional hairstyle database; detecting the face in an input frontal photo with a face detection algorithm and locating the frontal facial feature points with an active shape model; generating a 3D face model from the 3D face database, the input photo and the feature point coordinates using a deformation model method; segmenting the hair in the input photo with a Markov-random-field-based method; extracting the hair texture from the segmentation result; obtaining the best-matching hair model; and combining the face model with the hair model. This avoids adding hairstyles manually, improves efficiency, and ensures high fidelity.
- CN201410161576.6 discloses a device for generating virtual human hair, including: an obtaining unit that obtains a frontal photo of a human face; a first determining unit that determines a three-dimensional head model and a hair template based on the obtained photo; a second determining unit that, according to the adaptation value of the hair template determined by the first determining unit, looks up, in a preset correspondence between standard adaptation values of hair templates and description information of standard hair templates, the description information of the standard hair template corresponding to that adaptation value; and a generating unit that obtains an exclusive hair template fitted to the three-dimensional head model from the description information determined by the second determining unit and the head model determined by the first determining unit.
- The present invention aims to provide a human hairstyle generation method based on multi-feature retrieval and deformation.
- The technical solution adopted by the present invention includes the following steps:
- The several distances are combined with weights to obtain a dissimilarity score for each hairstyle in the hairstyle database; these scores are sorted and the smallest is taken out, identifying the required hair model.
- In step three, the feature points of the input face are matched and aligned with the feature points of the standard face, where s is the scaling ratio, θ is the rotation angle, and t is the translation displacement.
- In step four, the hairstyle masks are first retrieved based on the Minkowski distance; then, weights are superimposed for salient features; next, the hairstyles are retrieved again based on the Hausdorff distance; finally, the hairstyles are retrieved based on the hair flow information.
- In step four, the mask of the hairstyle to be retrieved is defined as H, and the mask of one hairstyle in the hairstyle database as B_i.
- The corresponding Minkowski distance is: D(H, B_i) = (Σ_{x,y} |H(x,y) − B_i(x,y)|^p)^(1/p).
- The Minkowski distance is computed between the current input hairstyle and every hairstyle in the database, and the results are sorted from small to large to obtain the score ranking vector M over the hairstyle database.
- In step four, the very salient features of the hairstyle are given correspondingly higher weights; for all retrieved candidate hairstyles, the weight of the forehead portion is increased by 25%.
- In step four, the hairstyle mask to be retrieved is defined as H, and the standard hairstyle of the hairstyle library as B.
- The corresponding Hausdorff distance is: d_H(H, B) = max{ sup_{h∈H} inf_{b∈B} d(h, b), sup_{b∈B} inf_{h∈H} d(h, b) }.
- In step four, the flow direction field of the hair is obtained with a gradient-based method.
- The horizontal gradient of the hair is first computed: d_x(i,j) = [I(i+1,j) − I(i−1,j)]/2.
- In step five, hairstyle recognition and matching is based on basic blocks of hair curl: labeled hair data of different types is used to build and train a model with a deep learning network, yielding a hair network (HairNet);
- In step six, retrieval is based on multi-feature fusion: for M_2, H and L, weights a : b : c are assigned, and the three vectors are fused into a comprehensive ranking vector F: F = aM_2 + bH + cL.
- F is sorted from small to large, and the top N positions are selected as candidate hairstyles.
- The candidates are then ranked by the straightness or curliness of the hair, and the highest-ranked one is selected as the final retrieval result.
- The present invention uses a single frontal face photo to retrieve, from a large three-dimensional hairstyle database, the three-dimensional hair model most similar to the photo, avoiding manual modeling and improving efficiency;
- the retrieved model is then deformed to a certain degree, so that the generated 3D hairstyle and the input picture are as similar as possible, ensuring a high degree of fidelity.
- FIG. 1 is an operation explanatory diagram of obtaining a mask;
- FIG. 2 is a schematic diagram of the feature points labeled on a human face;
- FIG. 3 is a flowchart of obtaining the key points of a human face;
- FIG. 5 is an operation flowchart of hairstyle identification and matching of basic hair-curl blocks, in which C denotes a convolutional layer, P a pooling layer, D a 2D dense block, and T a 2D transition layer;
- FIGS. 6a-6d are schematic diagrams of hairstyle identification and matching of basic hair-curl blocks.
- This embodiment proposes a method for generating human hairstyles based on multi-feature retrieval and deformation, aimed at high-precision three-dimensional portrait generation.
- A large three-dimensional hairstyle database is searched for the three-dimensional hair model most similar to the input photo, and the retrieved model is deformed to a certain extent so that the generated three-dimensional hairstyle is as similar as possible to the input picture, thereby obtaining the three-dimensional hairstyle of the input portrait.
- The first stage is data pre-processing: the frontal photos of all hairstyles in the 3D hairstyle library and the mask maps corresponding to the hair need to be rendered, after which retrieval results are determined by comparing the 2D images.
- Step one: segment the hairstyle.
- To locate the hair in a single frontal face image, the image must first be segmented to obtain a shape mask, which can then be compared with the masks in the hairstyle database.
- FIG. 1 illustrates the operation of obtaining a mask; segmentation of the hairstyle can be performed manually or automatically.
- Step two: face feature point recognition and alignment.
- Across portraits, the positions of the hair relative to the head are substantially the same.
- The input portrait should therefore be aligned with the feature points of the standard face.
- The algorithm uses cascaded regressors.
- A set of calibrated face pictures is needed as the training set.
- This embodiment uses an open-source data set containing about 2,000 landmark-annotated training images, and uses this data set to train a DCNN-based facial feature point predictor.
- The network uses a convolutional neural network structure as the basis for training.
- First, an initial shape is generated, i.e., an approximate feature point position is estimated; a gradient boosting algorithm is then used to reduce the sum of squared errors between this initial shape and the verified ground truth. The least-squares method is used to minimize the error, yielding the cascade regressor of each stage.
- S(t) denotes the estimate of the current shape S.
- Each regressor r_t(·,·) computes an update in each cascade stage from the current state S(t) and the input photo I.
- The update formula is as follows: S(t+1) = S(t) + r_t(I, S(t)).
- The most critical part of the cascade is that its prediction is based on features, for example grayscale values of pixels computed from I, relative to the current state.
- Each r_t is trained using regression trees learned by gradient boosting, minimizing the error with the least-squares method.
- t denotes the cascade stage number.
- r_t(·,·) denotes the regressor of the current stage.
- The inputs of the regressor are the image I and the feature points updated by the previous-stage regressor.
- The features used can be grayscale values or others.
- Each regressor consists of many trees; the parameters of each tree are trained from the coordinate differences between the current shape and the verified ground truth, and from randomly selected pairs of pixels. Referring to FIG. 3, when learning the trees, the ERT (ensemble of regression trees) directly stores the shape update values in the leaf nodes. After the initial shape S passes through all learned trees, the mean shape is added to the updates of all traversed leaf nodes to obtain the final positions of the face key points.
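- The cascade described above is the ensemble-of-regression-trees (ERT) scheme of Kazemi and Sullivan, for which the open-source dlib library ships a pre-trained 68-landmark implementation. A minimal inference-side sketch follows, assuming dlib is installed and its model file has been downloaded (the file name and API below are dlib's, not part of the patent):

```python
# Minimal sketch: 68-point facial landmark detection with dlib's ERT
# implementation (Kazemi & Sullivan). The pre-trained model file
# "shape_predictor_68_face_landmarks.dat" must be downloaded separately.
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def detect_landmarks(image):
    """image: RGB array (e.g. dlib.load_rgb_image). Returns (68, 2) or None."""
    faces = detector(image, 1)          # upsample once to find small faces
    if len(faces) == 0:
        return None
    shape = predictor(image, faces[0])  # runs the cascade of regression trees
    return np.array([[p.x, p.y] for p in shape.parts()])
```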
- Step three: align the image with the standard human face to obtain the corresponding hair region,
- where s is the scaling ratio, θ is the rotation angle, and t is the translation displacement.
- The least-squares method is used to solve for the rotation, translation, and scaling matrices so that the first vector is aligned as closely as possible to the points of the second vector.
- Let the two shape matrices be p and q, each row of a matrix holding the x and y coordinates of one feature point; with 68 feature point coordinates, p ∈ R^(68×2).
- The objective function of the least-squares method is: argmin over s, R, T of Σ_{i=1..68} ||s·R·p_i + T − q_i||².
- Solving it gives the warping function corresponding to the face alignment, i.e., the corresponding s, R, T.
- Applying the obtained warping function to the segmented hairstyle mask yields an aligned hairstyle.
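- A compact sketch of this alignment step follows; the patent only specifies a least-squares objective, so the closed-form SVD-based (Umeyama-style) solver used here is an assumption:

```python
# Sketch: closed-form least-squares similarity transform (s, R, T) mapping
# shape p onto shape q, both (68, 2) landmark arrays. The SVD-based solution
# is an assumption; the patent only states the least-squares objective.
import numpy as np

def similarity_transform(p, q):
    mu_p, mu_q = p.mean(axis=0), q.mean(axis=0)
    pc, qc = p - mu_p, q - mu_q
    cov = qc.T @ pc / len(p)                  # 2x2 cross-covariance matrix
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(2)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[1, 1] = -1.0                        # guard against reflections
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / pc.var(axis=0).sum()
    T = mu_q - s * R @ mu_p
    return s, R, T

def warp_points(points, s, R, T):
    """Apply the transform to landmarks or to hair-mask pixel coordinates."""
    return s * points @ R.T + T
```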
- Step four: compute the Minkowski distance between the hair region and the hair masks of all frontal human faces in the hairstyle database.
- The Minkowski distance compares the shape similarity of two hairstyle masks; it essentially measures the non-overlapping area, so the larger the non-overlapping area, the larger the Minkowski distance.
- The mask of the hairstyle to be retrieved is defined as H, and the mask of a hairstyle in the hairstyle database as B_i.
- The corresponding Minkowski distance is: D(H, B_i) = (Σ_{x,y} |H(x,y) − B_i(x,y)|^p)^(1/p).
- The Minkowski distance is computed between the current input hairstyle and every hairstyle in the database, and the results are sorted from small to large to obtain the score ranking vector M over the hairstyle database.
- A correspondingly higher weight is given so that the retrieved hairstyles are as similar as possible in the bangs.
- The weight of the forehead portion is increased by 25%: let the bangs area of the standard head be L; after face alignment, the L area of the input photo is compared with the standard head's L area, and wherever they are inconsistent the weight is multiplied by 1.25, so that mismatches there count more. Adding these salient regions to the previous Minkowski distances and sorting yields the improved Minkowski distance vector M_2.
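- A sketch of this retrieval step, assuming aligned binary masks, p = 1 (where the Minkowski distance reduces to counting weighted non-overlap), and an illustrative forehead mask:

```python
# Sketch: Minkowski-distance retrieval over aligned binary hair masks, with
# the 25% extra weight on mismatches in the forehead (bangs) region. The mask
# layout, p = 1, and the helper names are illustrative assumptions.
import numpy as np

def minkowski_score(h, b, forehead, p=1.0):
    """h, b: binary masks (H, W); forehead: boolean mask of the bangs region."""
    diff = np.abs(h.astype(float) - b.astype(float))
    w = np.where(forehead & (diff > 0), 1.25, 1.0)  # boost salient mismatches
    return float((w * diff ** p).sum() ** (1.0 / p))

def rank_database(h, masks, forehead):
    """Return database indices sorted most-similar-first (the vector M2)."""
    scores = [minkowski_score(h, b, forehead) for b in masks]
    return np.argsort(scores)
```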
- Some hairstyles may have very thin, long braids on the sides; such a feature may contribute little to the mask overlap area, yet it matters greatly to human perception.
- The Hausdorff distance is applied here so that such details of the hair can be retained: the Hausdorff distance measures the largest discrepancy between the two hairstyles.
- The hairstyle mask to be retrieved is defined as H,
- and the standard hairstyle of the hairstyle library as B.
- The corresponding Hausdorff distance is: d_H(H, B) = max{ sup_{h∈H} inf_{b∈B} d(h, b), sup_{b∈B} inf_{h∈H} d(h, b) }.
- The Hausdorff distance is computed between the current input hairstyle and all hairstyles in the database, and sorting from small to large yields the corresponding ranking vector H.
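- A sketch of this comparison using SciPy's directed Hausdorff distance over the foreground pixel coordinates of the two masks; treating every hair pixel (rather than, say, only boundary pixels) as a set point is an assumption:

```python
# Sketch: symmetric Hausdorff distance between two hair masks, using SciPy's
# directed_hausdorff on the coordinates of the masks' foreground pixels.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_score(h, b):
    pts_h = np.argwhere(h > 0)    # (row, col) coordinates of hair pixels
    pts_b = np.argwhere(b > 0)
    d_hb = directed_hausdorff(pts_h, pts_b)[0]
    d_bh = directed_hausdorff(pts_b, pts_h)[0]
    return max(d_hb, d_bh)        # H(A, B) = max of the two directed distances
```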
- A gradient-based method is first used to obtain the flow direction field of the hair. The flow direction of hair is generally perpendicular to its gradient field, so for an input hairstyle photo I the horizontal gradient of the hair is first computed: d_x(i,j) = [I(i+1,j) − I(i−1,j)]/2; the vertical gradient is then d_y(i,j) = [I(i,j+1) − I(i,j−1)]/2; and the flow field C satisfies [C_x, C_y]·[d_x, d_y]^T = 0, from which C is obtained. The similarity of C is added to the ranking as a criterion, yielding the ranking vector L.
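- A sketch of the flow-field computation following the central-difference formulas above; obtaining the flow by rotating the gradient 90 degrees is one way to satisfy the orthogonality constraint, and the border handling is a simplification of this sketch:

```python
# Sketch: gradient-based hair flow field. Central differences follow the
# formulas above; taking the flow as the gradient rotated by 90 degrees
# satisfies [C_x, C_y] . [d_x, d_y]^T = 0. Border wrap-around (np.roll)
# is a simplification.
import numpy as np

def hair_flow_field(I):
    """I: grayscale hair image as a 2D float array; returns a unit flow field."""
    d_x = (np.roll(I, -1, axis=1) - np.roll(I, 1, axis=1)) / 2.0
    d_y = (np.roll(I, -1, axis=0) - np.roll(I, 1, axis=0)) / 2.0
    C_x, C_y = -d_y, d_x                    # perpendicular to the gradient
    norm = np.sqrt(C_x ** 2 + C_y ** 2) + 1e-8
    return C_x / norm, C_y / norm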
- Step five: hairstyle recognition and matching based on basic blocks of hair curl: a large amount of labeled hair data of different types (straight, curly, braided, and small-curl) is used to train a deep learning network, HairNet.
- The input hair is first sampled through a Gaussian pyramid into input images of different scales, together with the standard images in the hairstyle library.
- Superpixel segmentation is performed on the hair part to obtain hair pieces of different sizes; these pieces are then uniformly scaled up into patches of the same size, which are fed into the hair network.
- For each patch, the type with the maximum probability is finally obtained.
- The input hair is then matched against the candidate hairs in the hairstyle library.
- Specifically, the hair is divided into blocks, and a one-to-one multi-scale histogram matching is performed to obtain the deviation values of the different candidate hairs.
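- A sketch of the pyramid-and-superpixel patch pipeline using scikit-image; the classifier call, patch size, and superpixel count are illustrative assumptions, since the patent does not fix them:

```python
# Sketch: multi-scale superpixel patches for HairNet classification, using
# scikit-image. `hair_net`, the patch size, and the superpixel count are
# illustrative assumptions.
import numpy as np
from skimage.segmentation import slic
from skimage.transform import pyramid_gaussian, resize

def hair_patches(image, patch_size=32, max_layer=2):
    """image: RGB hair region. Yields same-size patches cut from superpixels."""
    for layer in pyramid_gaussian(image, max_layer=max_layer, channel_axis=-1):
        labels = slic(layer, n_segments=100, compactness=10)
        for lab in np.unique(labels):
            rows, cols = np.nonzero(labels == lab)
            piece = layer[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
            # uniformly pull each hair piece up to a fixed-size patch
            yield resize(piece, (patch_size, patch_size))

# classification sketch: the type with maximum probability for each patch
# probs = hair_net.predict(np.stack(list(hair_patches(img))))
# patch_types = probs.argmax(axis=1)
```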
- Step six: retrieval based on multi-feature fusion, which includes:
- obtaining the ranking M_2 as above, computing the ranking H from the Hausdorff distance, and combining them with the hair flow-direction ranking L; weights a : b : c are assigned respectively, and the three vectors are fused into a comprehensive ranking vector F: F = aM_2 + bH + cL.
- F is sorted from small to large, and the top N positions are selected as candidate hairstyles.
- Among these N candidates, the previously trained HairNet ranks the straightness or curliness of the hair, and the top-ranked candidate is selected as the final retrieval result R.
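- A sketch of the fusion step; the patent fixes only the form F = aM_2 + bH + cL, so the particular weights and N below are illustrative:

```python
# Sketch: fusing the ranking vectors M2 (weighted Minkowski), H (Hausdorff)
# and L (flow field) into F = a*M2 + b*H + c*L. The weights and N are
# illustrative assumptions; the patent fixes only the form of F.
import numpy as np

def fuse_rankings(M2, H, L, a=0.5, b=0.3, c=0.2, N=10):
    F = a * np.asarray(M2) + b * np.asarray(H) + c * np.asarray(L)
    return np.argsort(F)[:N]   # smallest N scores = candidate hairstyles

# The final pick among the N candidates is made by HairNet's
# curliness/straightness ranking from step five.
```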
Abstract
Description
Claims (10)
- A human hairstyle generation method based on multi-feature retrieval and deformation, characterized by comprising the following steps: step one, obtaining the hairstyle mask in a single frontal face image; step two, recognizing the feature points of the face and matching them against the faces in a hairstyle database; step three, aligning the image with the standard face to obtain the corresponding hair region; step four, computing the Minkowski distance between the hair region and the hair masks of all frontal faces in the hairstyle database, and assigning corresponding weights after sorting from small to large; step five, computing the flow fields of the several most similar hairs and matching them against the hairstyle to be detected; step six, training a deep learning network to detect the hairstyles of basic hair blocks at different scales, performing multi-scale histogram matching between the hair picture to be detected and the several candidate hairs to obtain different matching scores, and finally taking out the most similar hair picture.
- The human hairstyle generation method based on multi-feature retrieval and deformation according to claim 1, characterized in that: the detail similarity of the hair is computed via the Hausdorff distance, and the weighting of step four is repeated; combined with the weights of step four, the matched hairstyles are sorted and the ten most similar hairstyles are taken out.
- The human hairstyle generation method based on multi-feature retrieval and deformation according to claim 2, characterized in that: in step four, the hairstyle masks are first retrieved based on the Minkowski distance; then, weights are superimposed for salient features; next, the hairstyles are retrieved again based on the Hausdorff distance; finally, the hairstyles are retrieved based on the hair flow information.
- The human hairstyle generation method based on multi-feature retrieval and deformation according to claim 5, characterized in that: in step four, very salient features of the hairstyle are given correspondingly higher weights; for all retrieved candidate hairstyles, the weight of the forehead portion is increased by 25%; let the bangs region of the standard head be L; after face alignment, the L region of the input photo is compared with the standard head's L region, and wherever they are inconsistent the weight is multiplied by 1.25; these salient regions are added to the previous Minkowski distances and sorted to obtain the improved Minkowski distance vector M_2.
- The human hairstyle generation method based on multi-feature retrieval and deformation according to claim 7, characterized in that: in step four, a gradient-based method is used to obtain the flow field of the hair; for an input hairstyle photo I, the horizontal gradient of the hair is first computed: d_x(i,j) = [I(i+1,j) − I(i−1,j)]/2; the vertical gradient is then computed: d_y(i,j) = [I(i,j+1) − I(i,j−1)]/2; the flow field C of the hair then satisfies [C_x, C_y]·[d_x, d_y]^T = 0, from which the flow field C can be solved; the similarity of C is added to the ranking as a criterion, yielding the ranking vector L.
- The human hairstyle generation method based on multi-feature retrieval and deformation according to claim 8, characterized in that: in step five, hairstyle recognition and matching is based on basic blocks of hair curl: labeled hair data of different types is used with a deep learning network to build and train a model, yielding the hair network; the input hair is sampled through a Gaussian pyramid into input images of different scales together with the standard images in the hairstyle library; superpixel segmentation is performed on the hair part, and the resulting hair blocks are uniformly scaled into patches of the same size; the patches are fed into the hair network.
- The human hairstyle generation method based on multi-feature retrieval and deformation according to claim 9, characterized in that: in step six, retrieval is based on multi-feature fusion, i.e., for M_2, H and L, weights a : b : c are assigned respectively, and the three vectors are fused into a comprehensive ranking vector F: F = aM_2 + bH + cL; F is sorted from small to large, and the top N positions are selected as candidate hairstyles; among these N candidate hairstyles, the similarity in hair curliness or straightness is ranked, and the top-ranked one is selected as the final retrieval candidate.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/962,227 US10891511B1 (en) | 2018-09-30 | 2019-09-23 | Human hairstyle generation method based on multi-feature retrieval and deformation |
KR1020207016099A KR102154470B1 (ko) | 2018-09-30 | 2019-09-23 | Method for generating a three-dimensional human hairstyle based on multi-feature retrieval and deformation |
GB2009174.0A GB2581758B (en) | 2018-09-30 | 2019-09-23 | Human hair style generation method based on multi-feature search and deformation |
JP2020533612A JP6891351B2 (ja) | 2018-09-30 | 2019-09-23 | Method for generating a human hairstyle based on multi-feature retrieval and deformation |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811165895.9 | 2018-09-30 | | |
CN201811165895.9A CN109408653B (zh) | 2018-09-30 | Human hairstyle generation method based on multi-feature retrieval and deformation |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020063527A1 (zh) | 2020-04-02 |
Family
ID=65466814
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/107263 WO2020063527A1 (zh) | 2018-09-30 | 2020-04-02 | Human hairstyle generation method based on multi-feature retrieval and deformation |
Country Status (6)
Country | Link |
---|---|
US (1) | US10891511B1 (zh) |
JP (1) | JP6891351B2 (zh) |
KR (1) | KR102154470B1 (zh) |
CN (1) | CN109408653B (zh) |
GB (1) | GB2581758B (zh) |
WO (1) | WO2020063527A1 (zh) |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109408653B (zh) * | 2018-09-30 | 2022-01-28 | 叠境数字科技(上海)有限公司 | Human hairstyle generation method based on multi-feature retrieval and deformation |
CN110415339B (zh) * | 2019-07-19 | 2021-07-13 | 清华大学 | Method and device for computing matching relationships between input three-dimensional shapes |
CN110738595B (zh) * | 2019-09-30 | 2023-06-30 | 腾讯科技(深圳)有限公司 | Picture processing method, apparatus and device, and computer storage medium |
CN110874419B (zh) * | 2019-11-19 | 2022-03-29 | 山东浪潮科学研究院有限公司 | Fast face database retrieval technique |
CN111192244B (zh) * | 2019-12-25 | 2023-09-29 | 新绎健康科技有限公司 | Method and system for determining tongue features based on key points |
WO2022047463A1 (en) * | 2020-08-22 | 2022-03-03 | Snap Inc. | Cross-domain neural networks for synthesizing image with fake hair combined with real image |
CN113762305B (zh) * | 2020-11-27 | 2024-04-16 | 北京沃东天骏信息技术有限公司 | Method and device for determining hair loss type |
CN112734633A (zh) * | 2021-01-07 | 2021-04-30 | 京东方科技集团股份有限公司 | Virtual hairstyle replacement method, electronic device and storage medium |
KR102507460B1 (ko) * | 2021-04-05 | 2023-03-07 | 고려대학교 산학협력단 | Method and apparatus for automatically generating cartoon backgrounds |
CN113112616A (zh) * | 2021-04-06 | 2021-07-13 | 济南大学 | Improved editable three-dimensional generation method |
CN113129347B (zh) * | 2021-04-26 | 2023-12-12 | 南京大学 | Self-supervised single-view three-dimensional hair strand model reconstruction method and system |
CN113538455B (zh) * | 2021-06-15 | 2023-12-12 | 聚好看科技股份有限公司 | Three-dimensional hairstyle matching method and electronic device |
KR102542705B1 (ko) * | 2021-08-06 | 2023-06-15 | 국민대학교산학협력단 | Method and apparatus for action classification based on face tracking |
CN113744286A (zh) * | 2021-09-14 | 2021-12-03 | Oppo广东移动通信有限公司 | Virtual hair generation method and apparatus, computer-readable medium and electronic device |
FR3127602A1 (fr) * | 2021-09-27 | 2023-03-31 | Idemia Identity & Security France | Method for generating an augmented image and associated device |
US12118779B1 (en) * | 2021-09-30 | 2024-10-15 | United Services Automobile Association (Usaa) | System and method for assessing structural damage in occluded aerial images |
CN114187633B (zh) * | 2021-12-07 | 2023-06-16 | 北京百度网讯科技有限公司 | Image processing method and apparatus, and training method and apparatus for an image generation model |
CN115311403B (zh) * | 2022-08-26 | 2023-08-08 | 北京百度网讯科技有限公司 | Training method for a deep learning network, and virtual avatar generation method and apparatus |
TWI818824B (zh) * | 2022-12-07 | 2023-10-11 | 財團法人工業技術研究院 | Device and method for computing the face swing direction of an occluded face image |
CN116503924B (zh) * | 2023-03-31 | 2024-01-26 | 广州翼拍联盟网络技术有限公司 | Portrait hair edge processing method, apparatus, computer device and storage medium |
CN116509118B (zh) * | 2023-04-26 | 2024-08-20 | 深圳市华南英才科技有限公司 | Control method and system for an ultra-high-speed hair dryer |
CN117389676B (zh) * | 2023-12-13 | 2024-02-13 | 成都白泽智汇科技有限公司 | Intelligent hairstyle adaptation display method based on a display interface |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0897680A2 (en) * | 1997-08-12 | 1999-02-24 | Shiseido Company, Ltd. | Method for selecting suitable hairstyle and image-map for hairstyle |
KR20050018921A (ko) * | 2005-02-01 | 2005-02-28 | 황지현 | Method and system for automatic hairstyle synthesis |
CN103366400A (zh) * | 2013-07-24 | 2013-10-23 | 深圳市华创振新科技发展有限公司 | Method for automatically generating a three-dimensional avatar |
CN106372652A (zh) * | 2016-08-28 | 2017-02-01 | 乐视控股(北京)有限公司 | Hairstyle recognition method and hairstyle recognition device |
CN108280397A (zh) * | 2017-12-25 | 2018-07-13 | 西安电子科技大学 | Human-image hair detection method based on a deep convolutional neural network |
CN109408653A (zh) * | 2018-09-30 | 2019-03-01 | 叠境数字科技(上海)有限公司 | Human hairstyle generation method based on multi-feature retrieval and deformation |
Family Cites Families (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002083318A (ja) * | 2000-09-07 | 2002-03-22 | Sony Corp | Image processing device and method, and recording medium |
KR100665371B1 (ko) * | 2005-05-24 | 2007-01-04 | 곽노윤 | Method for generating multiple virtual hairstyles using semi-automatic field morphing |
EP2427857B1 (en) * | 2009-05-04 | 2016-09-14 | Oblong Industries, Inc. | Gesture-based control systems including the representation, manipulation, and exchange of data |
US8638993B2 (en) * | 2010-04-05 | 2014-01-28 | Flashfoto, Inc. | Segmenting human hairs and faces |
CN102419868B (zh) * | 2010-09-28 | 2016-08-03 | 三星电子株式会社 | Device and method for 3D hair modeling based on a 3D hair template |
CN103218838A (zh) * | 2013-05-11 | 2013-07-24 | 苏州华漫信息服务有限公司 | Automatic hair drawing method for face cartoonization |
CN103955962B (zh) | 2014-04-21 | 2018-03-09 | 华为软件技术有限公司 | Device and method for generating virtual human hair |
WO2016032410A1 (en) * | 2014-08-29 | 2016-03-03 | Sagiroglu Seref | Intelligent system for photorealistic facial composite production from only fingerprint |
US9652688B2 (en) * | 2014-11-26 | 2017-05-16 | Captricity, Inc. | Analyzing content of digital images |
US9928601B2 (en) * | 2014-12-01 | 2018-03-27 | Modiface Inc. | Automatic segmentation of hair in images |
WO2016097732A1 (en) * | 2014-12-16 | 2016-06-23 | Metail Limited | Methods for generating a 3d virtual body model of a person combined with a 3d garment image, and related devices, systems and computer program products |
US10796480B2 (en) * | 2015-08-14 | 2020-10-06 | Metail Limited | Methods of generating personalized 3D head models or 3D body models |
US9864901B2 (en) * | 2015-09-15 | 2018-01-09 | Google Llc | Feature detection and masking in images based on color distributions |
CN105354869B (zh) * | 2015-10-23 | 2018-04-20 | 广东小天才科技有限公司 | Method and system for transferring a user's real head features onto a virtual avatar |
CN105844706B (zh) * | 2016-04-19 | 2018-08-07 | 浙江大学 | Fully automatic three-dimensional hair modeling method based on a single image |
WO2017181332A1 (zh) * | 2016-04-19 | 2017-10-26 | 浙江大学 | Fully automatic three-dimensional hair modeling method based on a single image |
WO2017185301A1 (zh) * | 2016-04-28 | 2017-11-02 | 华为技术有限公司 | Three-dimensional hair modeling method and device |
CN108463823B (zh) | 2016-11-24 | 2021-06-01 | 荣耀终端有限公司 | Method, device and terminal for reconstructing a user's hair model |
CN107886516B (zh) * | 2017-11-30 | 2020-05-15 | 厦门美图之家科技有限公司 | Method and computing device for computing hair strand direction in a portrait |
US11023710B2 (en) * | 2019-02-20 | 2021-06-01 | Huawei Technologies Co., Ltd. | Semi-supervised hybrid clustering/classification system |
- 2018-09-30: CN application CN201811165895.9A filed (granted as CN109408653B, status: active)
- 2019-09-23: JP application JP2020533612A filed (granted as JP6891351B2, status: active)
- 2019-09-23: WO application PCT/CN2019/107263 filed (published as WO2020063527A1, status: application filing)
- 2019-09-23: KR application KR1020207016099A filed (granted as KR102154470B1, status: IP right grant)
- 2019-09-23: US application US16/962,227 filed (granted as US10891511B1, status: active)
- 2019-09-23: GB application GB2009174.0A filed (granted as GB2581758B, status: active)
Non-Patent Citations (4)
Title |
---|
- CHAI, Menglei. "Single Image 3D Hair Modeling Techniques and Applications." Doctoral dissertation, Zhejiang University, no. 1, 31 January 2018.
- CHEN, Hong et al. "A Generative Sketch Model for Human Hair Analysis and Synthesis." Departments of Statistics and Computer Science, University of California, Los Angeles, 31 July 2006, pp. 1-35. XP055699789.
- FENG, Min et al. "A Classified Method of Human Hair for Hair Sketching." 31 December 2008, pp. 109-114. XP031287004.
- WANG, Nan et al. "Hair Style Retrieval by Semantic Mapping on Informative Patches." 31 December 2011, pp. 110-114. XP032130112. DOI: 10.1109/ACPR.2011.6166682.
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021256319A1 (ja) * | 2020-06-15 | 2021-12-23 | ソニーグループ株式会社 | Information processing device, information processing method, and recording medium |
CN112819921A (zh) * | 2020-11-30 | 2021-05-18 | 北京百度网讯科技有限公司 | Method, apparatus, device and storage medium for changing a person's hairstyle |
CN112819921B (zh) * | 2020-11-30 | 2023-09-26 | 北京百度网讯科技有限公司 | Method, apparatus, device and storage medium for changing a person's hairstyle |
CN113269822A (zh) * | 2021-05-21 | 2021-08-17 | 山东大学 | Method and system for reconstructing character hairstyle portraits for 3D printing |
CN113269822B (zh) * | 2021-05-21 | 2022-04-01 | 山东大学 | Method and system for reconstructing character hairstyle portraits for 3D printing |
Also Published As
Publication number | Publication date |
---|---|
CN109408653A (zh) | 2019-03-01 |
US20200401842A1 (en) | 2020-12-24 |
GB2581758B (en) | 2021-04-14 |
GB2581758A (en) | 2020-08-26 |
JP6891351B2 (ja) | 2021-06-18 |
GB202009174D0 (en) | 2020-07-29 |
US10891511B1 (en) | 2021-01-12 |
KR102154470B1 (ko) | 2020-09-09 |
CN109408653B (zh) | 2022-01-28 |
JP2021507394A (ja) | 2021-02-22 |
KR20200070409A (ko) | 2020-06-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020063527A1 (zh) | Human hairstyle generation method based on multi-feature retrieval and deformation | |
Cheng et al. | Exploiting effective facial patches for robust gender recognition | |
CN105844706B (zh) | Fully automatic three-dimensional hair modeling method based on a single image | |
CN110263659B (zh) | Finger vein recognition method and system based on triplet loss and a lightweight network | |
US11403874B2 (en) | Virtual avatar generation method and apparatus for generating virtual avatar including user selected face property, and storage medium | |
JP7512262B2 (ja) | Facial keypoint detection method, apparatus, computer device and computer program | |
WO2018107979A1 (zh) | Multi-pose face feature point detection method based on cascade regression | |
CN101320484B (zh) | Method for generating virtual face images and a three-dimensional face recognition method | |
WO2017181332A1 (zh) | Fully automatic three-dimensional hair modeling method based on a single image | |
Zhu et al. | Discriminative 3D morphable model fitting | |
CN101561874B (zh) | Method for generating virtual face images | |
Lemaire et al. | Fully automatic 3D facial expression recognition using differential mean curvature maps and histograms of oriented gradients | |
Yang et al. | CNN based 3D facial expression recognition using masking and landmark features | |
JP2012160178A (ja) | Object recognition device, method for performing object recognition, and method for implementing a dynamic appearance model | |
CN103971112B (zh) | Image feature extraction method and device | |
CN113570684A (zh) | Image processing method and apparatus, computer device and storage medium | |
WO2022257456A1 (zh) | Hair information recognition method, apparatus, device and storage medium | |
Angadi et al. | Face recognition through symbolic modeling of face graphs and texture | |
CN110544310A (zh) | Feature analysis method for three-dimensional point clouds under hyperbolic conformal mapping | |
CN106919884A (zh) | Facial expression recognition method and device | |
CN104732247B (zh) | Facial feature localization method | |
Zhong et al. | Exploring features and attributes in deep face recognition using visualization techniques | |
Alsawwaf et al. | In your face: person identification through ratios and distances between facial features | |
CN109886091B (zh) | Three-dimensional facial expression recognition method based on weighted local curl patterns | |
CN115908260B (zh) | Model training method, face image quality evaluation method, device and medium | |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 19865679; Country of ref document: EP; Kind code of ref document: A1
| ENP | Entry into the national phase | Ref document number: 20207016099; Country of ref document: KR; Kind code of ref document: A
| ENP | Entry into the national phase | Ref document number: 202009174; Country of ref document: GB; Kind code of ref document: A; Free format text: PCT FILING DATE = 20190923
| ENP | Entry into the national phase | Ref document number: 2020533612; Country of ref document: JP; Kind code of ref document: A
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | EP: PCT application non-entry in European phase | Ref document number: 19865679; Country of ref document: EP; Kind code of ref document: A1