CN110543826A - Image processing method and device for virtual wearing of wearable product - Google Patents

Image processing method and device for virtual wearing of wearable product

Info

Publication number
CN110543826A
CN110543826A · Application CN201910721928.1A
Authority
CN
China
Prior art keywords
user
face
model
image processing
feature points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910721928.1A
Other languages
Chinese (zh)
Inventor
Yang Jianwei (杨剑伟)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shangshang Zhenbao (Beijing) Network Technology Co Ltd
Original Assignee
Shangshang Zhenbao (Beijing) Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shangshang Zhenbao (Beijing) Network Technology Co Ltd
Priority: CN201910721928.1A
Publication: CN110543826A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/14: Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • G06T 3/147: Transformations for image registration using affine transformations
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161: Detection; Localisation; Normalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168: Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an image processing method and device for the virtual wearing of a wearable product. The method comprises the following steps: extracting facial feature points of a model and a user; calculating a rotation vector from the facial feature points of the user, and obtaining the deflection angle of the user's face from the rotation vector; and replacing the face of the model with the face of the user using the deflection angle. In this technical solution, the rotation vector is calculated from the user's facial feature points to obtain the deflection angle of the user's face, and the model's face is replaced with the user's face using that angle, so face positioning is more accurate and the wearing effect is more realistic.

Description

Image processing method and device for virtual wearing of a wearable product
Technical Field
The invention relates to an image processing method and an image processing device for virtual wearing of a wearable product.
Background
Jewelry virtual try-on technology superimposes virtual 3D jewelry on an image of a real person. Through face recognition and real-time tracking it composites the ornament with the human body image and keeps body motion and the 3D product synchronized, presenting a lifelike try-on effect. Most jewelry virtual try-on systems use augmented reality to simulate how jewelry looks when worn on a human body. Early virtual jewelry try-on used 2D image tracking, which required markers or manual calibration. In recent years, 3D image sensors developed by Microsoft and Intel have improved speed and precision, but face positioning remains inaccurate, the wearing effect unrealistic, and operation relatively cumbersome.
The prior art generally lacks analysis of the face's deflection angle, so the try-on effect is poor when the deflection angles of the model's face and the user's face differ too much. Moreover, current approaches typically rely on GAN training, which makes data collection expensive and may raise copyright issues.
Disclosure of Invention
In view of the above problems in the related art, the present invention provides an image processing method and an image processing apparatus for virtual wearing of a wearable product.
The technical scheme of the invention is realized as follows:
according to an aspect of the present invention, there is provided an image processing method for virtual wearing of a wearable product, including:
extracting facial feature points of a model and a user;
calculating a rotation vector from the facial feature points of the user, and obtaining a deflection angle of the face of the user according to the rotation vector; and
replacing the face of the model with the face of the user using the deflection angle.
According to an embodiment of the present application, extracting the facial feature points of the model and the user includes: extracting the facial feature points of the model and the facial feature points of the user with a face detector.
According to an embodiment of the application, after extracting the facial feature points of the model and the user, the method further comprises: when the difference between the skin colors of the model and the user exceeds a preset value, adjusting the color information accordingly.
According to an embodiment of the present application, replacing the face of the model with the face of the user using the deflection angle includes: performing a Procrustes transformation and an affine transformation on the face of the user so that the face of the user is aligned with the face of the model.
According to an embodiment of the application, performing the Procrustes transformation includes minimizing the sum of the distances of all shapes to the average shape, i.e. minimizing

min Σᵢ ‖Xᵢ − X̄‖²

where X represents a shape and is a vector composed of a set of points, and X̄ is the average shape.
According to an embodiment of the present application, performing the affine transformation includes: finding a first shape and a first transformation, wherein the first transformation aligns each data instance with the first shape, and whether alignment is achieved is determined by the least-squares distance between the transformed shapes.
according to another aspect of the present invention, there is provided an image processing apparatus for virtual wearing of a wearable product, comprising:
a feature extraction module, configured to extract facial feature points of the model and the user;
a calculation module, configured to calculate a rotation vector from the facial feature points of the user and to obtain the deflection angle of the user's face from the rotation vector; and
a replacement module, configured to replace the face of the model with the face of the user using the deflection angle.
According to an embodiment of the application, the feature extraction module includes: and the face detector is used for extracting the facial feature points of the model and the facial feature points of the user.
According to an embodiment of the application, the calculation module is configured to perform a Procrustes transformation and an affine transformation on the face of the user so that the face of the user is aligned with the face of the model.
According to an embodiment of the application, the calculation module minimizes the sum of the distances of all shapes to the average shape, i.e. minimizes

min Σᵢ ‖Xᵢ − X̄‖²

where X represents a shape and is a vector consisting of a set of points, and X̄ is the average shape;
The calculation module is further configured to find a first shape and a first transformation, wherein the first transformation aligns each data instance with the first shape, and whether alignment is achieved is determined by the least-squares distance between the transformed shapes.
According to this technical solution, the rotation vector is calculated from the facial feature points of the user to obtain the deflection angle of the user's face, and the face of the model is replaced with the face of the user using the deflection angle. Face positioning is thus more accurate, and the wearing effect is more realistic.
Drawings
To more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings needed in the embodiments are briefly described below. The drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of an image processing method for virtual wearing of a wearable product according to an embodiment of the present invention;
FIG. 2 is a flow chart of analyzing pictures uploaded by a user according to an embodiment of the present invention;
FIG. 3 is a schematic illustration of Procrustes analysis according to an embodiment of the invention;
FIG. 4 is a flow diagram for performing 3D synthesis according to an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. The described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art from the given embodiments without creative effort fall within the scope of the present invention.
Due to the particularities of the jewelry industry, wearing precious jewelry involves drawbacks such as security risks and poor portability, so the advent of virtual jewelry try-on has broad implications for jewelry e-commerce and physical stores alike. In e-commerce, virtual try-on technology can present the effect of wearing jewelry without contact with the actual product. In a physical store, it also helps improve the rate at which visitors stay, the transaction rate, and the store's reputation. With this technology, a user can simply upload a photo and try on a variety of jewelry in different scenes.
Based on the above purpose, the present invention provides an image processing method for virtual wearing of wearable products. As shown in fig. 1, the method comprises the steps of:
S10, extracting facial feature points of the model and the user;
S20, calculating a rotation vector from the facial feature points of the user, and obtaining the deflection angle of the user's face from the rotation vector;
S30, replacing the face of the model with the face of the user using the deflection angle.
According to this technical solution, the rotation vector is calculated from the facial feature points of the user to obtain the deflection angle of the user's face, and the face of the model is replaced with the face of the user using the deflection angle. Face positioning is thus more accurate, and the wearing effect is more realistic.
In one embodiment, at step S10, extracting the facial feature points of the model and the user includes: extracting the facial feature points of the model and the facial feature points of the user with a face detector. In one embodiment, the following step may further be included after step S10: when the difference between the skin colors of the model and the user exceeds a preset value, the color information is adjusted accordingly.
In one embodiment, at step S30, a Procrustes transformation and an affine transformation may be performed on the face of the user so that the face of the user is aligned with the face of the model.
Performing the Procrustes transformation comprises minimizing the sum of the distances of all shapes to the average shape, i.e. minimizing

min Σᵢ ‖Xᵢ − X̄‖²

where X represents a shape and is a vector composed of a set of points, and X̄ is the average shape.
Performing the affine transformation includes: finding a first shape and a first transformation, wherein the first transformation aligns each data instance with the first shape, and whether alignment is achieved is determined by the least-squares distance between the transformed shapes.
In order to better understand the technical solution of the image processing method of the present invention, a specific example of the virtual wearing method is described below.
In the face replacement process, the virtual wearing method first extracts facial feature point information of the model and of the user from the selected faces. Then an affine transformation and a face alignment algorithm are used to extract specific regions of the image (based on the reference key points) and cover them with a mask. Because of factors such as an excessive difference between the skin color of the target person and that of the user, the color information needs to be adjusted accordingly. Finally, the corresponding face parts are seamlessly fused using Poisson blending, as sketched below.
If several faces appear in a static image, the user needs to click to select the target face to be exchanged. If the model's face and the user's face in the static picture differ too much, the scene is not suitable for the face-swap experience, because this technique compares the deflection angles of the faces: for faces with very different deflection angles the swap result is poor and degrades the user experience.
When the face deflection angles are close, the specific procedure of this technique is as follows: first, a 3D (three-dimensional) reference model of six key points (the left eye corner, right eye corner, nose tip, left mouth corner, right mouth corner, and chin) is positioned; then the existing open-source face detector Dlib is called to extract the corresponding face key points; next, the corresponding rotation vector is solved through a matrix and vector-space equation; and finally the rotation vector is converted into Euler angles for analysis to complete the replacement.
A 2D (two-dimensional) estimation process is then performed. In the 2D estimation process, the system analyzes the picture uploaded by the user; the specific analysis flow is shown in fig. 2, and the 2D face parameters are estimated. Once the parameters are estimated, the system sends them to the 3D synthesis module, making the face replacement more realistic.
Estimating the 2D parameters means extracting the feature points of the face, extracting a mask of the scene picture, adjusting the color difference between the facial features extracted under the mask and the target scene, and finally making a preliminary estimate through the Procrustes transformation and the affine transformation, laying the foundation for 3D face replacement.
The basic idea of the Procrustes transformation is to minimize the sum of the distances of all shapes to the average shape, i.e. to minimize

min Σᵢ ‖Xᵢ − X̄‖²

where X represents a shape and is a vector consisting of a set of points. For example, X = (x0, x1, x2, ..., xn−1, y0, y1, ..., yn−1), or, interleaved, X = (x0, y0, x1, y1, ..., xn−1, yn−1), where 0, 1, 2, ..., n−1 are subscripts.
To model the deformation of the face shape, the raw landmark data must first be processed to remove the components that belong to global rigid motion. When modeled geometrically in two dimensions, rigid motion usually appears as a similarity transformation, which includes scaling, in-plane rotation, and translation.
As shown in fig. 3, which illustrates a set of motion types under similarity transformations, the process of removing global rigid motion from a set of points is known as Procrustes analysis.
Mathematically, the objective of Procrustes analysis is to find both a canonical shape and a similarity transformation that aligns each data instance with this canonical shape, where alignment is measured by the least-squares distance between the transformed shapes. This is accomplished through an iterative process, implemented in the shape_model class.
A 3D synthesis process follows. As shown in fig. 4, the 3D algorithm flow performs face triangle analysis on top of the 2D algorithm to obtain a more accurate 3D model, after which Poisson blending is applied to the refined 3D model.
According to an embodiment of the invention, an image processing apparatus for virtual wearing of a wearable product is also provided. The image processing apparatus may be configured to execute a program to implement the image processing method described above. The image processing apparatus may include:
a feature extraction module for extracting facial feature points of the model and the user;
a calculation module for calculating a rotation vector from the facial feature points of the user and obtaining the deflection angle of the user's face from the rotation vector; and
a replacement module for replacing the face of the model with the face of the user using the deflection angle.
In one embodiment, the feature extraction module comprises a face detector for extracting the facial feature points of the model and the facial feature points of the user.
In one embodiment, the calculation module is configured to perform a Procrustes transformation and an affine transformation on the face of the user so that the face of the user is aligned with the face of the model.
In one embodiment, the calculation module minimizes the sum of the distances of all shapes to the average shape, i.e. minimizes

min Σᵢ ‖Xᵢ − X̄‖²

where X represents a shape and is a vector consisting of a set of points, and X̄ is the average shape.
The calculation module is further configured to find a first shape and a first transformation, wherein the first transformation aligns each data instance with the first shape, and whether alignment is achieved is determined by the least-squares distance between the transformed shapes.
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (10)

1. An image processing method for virtual wearing of a wearable product, comprising:
extracting facial feature points of a model and a user;
calculating a rotation vector from the facial feature points of the user, and obtaining a deflection angle of the face of the user according to the rotation vector; and
replacing the face of the model with the face of the user using the deflection angle.
2. The image processing method according to claim 1, wherein extracting the facial feature points of the model and the user comprises:
extracting the facial feature points of the model and the facial feature points of the user with a face detector.
3. The image processing method according to claim 1, further comprising, after extracting the facial feature points of the model and the user:
when the difference between the skin colors of the model and the user exceeds a preset value, adjusting the color information accordingly.
4. The image processing method according to claim 1, wherein replacing the face of the model with the face of the user using the deflection angle comprises:
performing a Procrustes transformation and an affine transformation on the face of the user so that the face of the user is aligned with the face of the model.
5. The image processing method according to claim 4, wherein performing the Procrustes transformation comprises:
minimizing the sum of the distances of all shapes to an average shape, i.e. minimizing

min Σᵢ ‖Xᵢ − X̄‖²

where X represents a shape and is a vector composed of a set of points, and X̄ is the average shape.
6. The image processing method according to claim 4, wherein performing the affine transformation comprises:
finding a first shape and a first transformation, wherein the first transformation aligns each data instance with the first shape, and whether alignment is achieved is determined by the least-squares distance between the transformed shapes.
7. An image processing apparatus for virtual wearing of a wearable product, comprising:
a feature extraction module, configured to extract facial feature points of a model and a user;
a calculation module, configured to calculate a rotation vector from the facial feature points of the user and to obtain a deflection angle of the face of the user according to the rotation vector; and
a replacement module, configured to replace the face of the model with the face of the user using the deflection angle.
8. The image processing apparatus according to claim 7, wherein the feature extraction module comprises:
a face detector, configured to extract the facial feature points of the model and the facial feature points of the user.
9. The image processing apparatus according to claim 7, wherein the calculation module is configured to perform a Procrustes transformation and an affine transformation on the face of the user so that the face of the user is aligned with the face of the model.
10. The image processing apparatus according to claim 9, wherein the calculation module minimizes the sum of the distances of all shapes to the average shape, i.e. minimizes

min Σᵢ ‖Xᵢ − X̄‖²

where X represents a shape and is a vector consisting of a set of points, and X̄ is the average shape; and
the calculation module is further configured to find a first shape and a first transformation, wherein the first transformation aligns each data instance with the first shape, and whether alignment is achieved is determined by the least-squares distance between the transformed shapes.
Application CN201910721928.1A · Priority/Filing Date: 2019-08-06 · Publication: CN110543826A (Pending) · Image processing method and device for virtual wearing of wearable product

Priority Applications (1)

Application Number: CN201910721928.1A · Priority Date: 2019-08-06 · Filing Date: 2019-08-06 · Title: Image processing method and device for virtual wearing of wearable product

Applications Claiming Priority (1)

Application Number: CN201910721928.1A · Priority Date: 2019-08-06 · Filing Date: 2019-08-06 · Title: Image processing method and device for virtual wearing of wearable product

Publications (1)

Publication Number: CN110543826A · Publication Date: 2019-12-06

Family

ID=68710070

Family Applications (1)

Application Number: CN201910721928.1A · Status: Pending · Priority Date: 2019-08-06 · Filing Date: 2019-08-06 · Title: Image processing method and device for virtual wearing of wearable product

Country Status (1)

Country Link
CN (1) CN110543826A (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105069746A (en) * 2015-08-23 2015-11-18 Hangzhou Xinhe Shengshi Technology Co Ltd Video real-time human face substitution method and system based on partial affine and color transfer technology
CN107274493A (en) * 2017-06-28 2017-10-20 Changzhou Campus of Hohai University Three-dimensional hairstyle try-on facial reconstruction method based on mobile platform
CN109035413A (en) * 2017-09-01 2018-12-18 Shenzhen Yunzhimeng Technology Co Ltd Virtual try-on method and system based on image deformation
CN107749084A (en) * 2017-10-24 2018-03-02 Guangzhou Zengqiang Information Technology Co Ltd Virtual try-on method and system based on 3D reconstruction technology
CN109977739A (en) * 2017-12-28 2019-07-05 Guangdong OPPO Mobile Telecommunications Corp Ltd Image processing method, device, storage medium and electronic equipment
CN108537126A (en) * 2018-03-13 2018-09-14 Northeastern University Face image processing system and method
CN109819313A (en) * 2019-01-10 2019-05-28 Tencent Technology (Shenzhen) Co Ltd Video processing method, device and storage medium
CN110084676A (en) * 2019-04-24 2019-08-02 Shenzhen Guanmeng Technology Co Ltd Online commodity recommendation method, network terminal and device with storage function

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117237583A (en) * 2023-11-16 2023-12-15 Chuangyun Rongda Information Technology (Tianjin) Co Ltd Virtual fitting method and system based on an uploaded head portrait
CN117237583B (en) * 2023-11-16 2024-02-09 Chuangyun Rongda Information Technology (Tianjin) Co Ltd Virtual fitting method and system based on an uploaded head portrait


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination