CN117237583A - Virtual fitting method and system based on uploading head portrait - Google Patents
- Publication number: CN117237583A
- Application number: CN202311523000.5A
- Authority
- CN
- China
- Prior art keywords: virtual, head portrait, clothing, user, virtual character
- Prior art date: 2023-11-16
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention discloses a virtual fitting method and system based on an uploaded head portrait, relating to the fields of computer vision and image processing, and comprising the following steps: a head portrait picture is uploaded and a virtual character model is generated with three-dimensional modeling software; a face detection algorithm identifies the face position in the head portrait picture; facial feature points of the portrait are extracted and aligned with the face of the virtual character model; clothing is rendered with the three-dimensional modeling software to construct virtual clothing, which is connected to the character's skeletal system; gestures are recognized and interpreted with a sensor-based physical gesture recognition technology, the clothing style, color, and pattern are selected by gesture, and the virtual character model is rotated to check the effect of wearing the clothing. Using computer vision and face recognition technology, the system can accurately analyze the user's facial and body characteristics and fuse the virtual clothing with the user's head portrait, displaying a more realistic and vivid try-on effect.
Description
Technical Field
The invention relates to the fields of computer vision and image processing, and in particular to a virtual fitting method and system based on an uploaded head portrait.
Background
With the rapid growth of electronic commerce, more and more users choose to shop online. In the conventional online shopping process, however, trying on clothing remains an unavoidable problem: the user cannot try garments on in person as in a physical store's fitting room, and so cannot judge their fit, appearance, or comfort. This creates hesitation and uncertainty for buyers and leads to high return rates, troubling both merchants and consumers. Virtual fitting techniques have been developed to solve this problem.
In recent years, remarkable progress has been made in computer vision and image processing, particularly in face recognition, pose estimation, and image synthesis. These technologies provide the foundation for virtual try-on systems, enabling accurate identification of the user's body contours and key parts and fusion of virtual clothing with the user's head portrait. Virtual reality, a computer-generated technology for simulating real environments, has likewise been widely applied in games, entertainment, and education.
However, existing virtual fitting systems still have problems. A principal one is the inaccuracy and difficulty of uploading a personal head portrait: the user may find it hard to take a suitable photograph, facial and body features may be distorted by photo quality and angle, and deviations arise when the portrait is fused with the virtual clothing, so that the linkage between user and garment during try-on looks irregular and unrealistic. This limits the accuracy and fidelity of the virtual fitting system and degrades the user's shopping experience.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a virtual fitting method and system based on an uploaded head portrait, solving the prior-art problems of limited accuracy and fidelity in virtual fitting systems and the resulting impact on the user's shopping experience.
The invention specifically provides the following technical scheme: a virtual fitting method based on uploading head portraits comprises the following steps:
uploading a head portrait picture of a user, and modeling the shape and form of the head portrait picture with three-dimensional modeling software to generate a virtual character model of the user;
applying a face detection algorithm to the head portrait picture, recognizing the bounding box of the face area under varying illumination and expression, and locating the position of the face in the head portrait picture;
extracting facial features from the head portrait picture after face detection, and determining the facial feature points in the user's head portrait picture;
aligning the facial feature points in the user's head portrait picture with the face of the virtual character model through Procrustes analysis, and fusing the aligned features with a generative adversarial network (GAN) on the basis of texture mapping;
rendering the clothing with three-dimensional modeling software to construct virtual clothing, and connecting the virtual clothing with the skeletal system of the virtual character model through a skeletal binding technique;
capturing acceleration, angular velocity, and position data of the user's hand through a sensor-based physical gesture recognition technology, and recognizing and interpreting gestures through pattern recognition and algorithmic processing;
and selecting the clothing style, color, and pattern according to the gestures, and rotating the virtual character model to check the effect of wearing the clothing.
Preferably, uploading the head portrait picture of the user includes the following steps:
providing a user interface through which the user either takes a photograph or selects a head portrait picture from the album;
and resizing and format-converting the selected head portrait picture through image preprocessing to obtain an image suitable for recognition and analysis by the three-dimensional modeling step.
Preferably, modeling the shape and form of the head portrait picture with three-dimensional modeling software to generate the user's virtual character model includes the following steps:
shaping the overall form of the head portrait through stretching, scaling, and deforming of basic geometry in the three-dimensional modeling software, and adding details and features with sculpting tools;
selecting joint points for the virtual character model, fusing skeleton attributes with those joint points, and connecting the skeleton with each part of the virtual character model;
assigning texture coordinates to each vertex or face of the virtual character model with texture-mapping techniques, and obtaining the color or texture information corresponding to the model surface from the texture image;
and obtaining smooth texture transitions from the texture coordinates of the vertices or faces of the model surface through texture interpolation.
Preferably, extracting facial features from the head portrait picture after face detection and determining the facial feature points in the user's head portrait picture includes the following steps:
extracting basic features with the local binary pattern or histogram of oriented gradients method, capturing the texture, edge, and shape information of the face;
applying a suitable data transformation to the extracted basic features, then performing PCA dimensionality reduction on the transformed features by computing the covariance matrix and carrying out its eigenvalue decomposition;
and selecting the top k most important eigenvectors as principal components, and fusing the PCA-reduced features with the other basic features to determine all features of the target person's head portrait.
Preferably, aligning the facial feature points in the user's head portrait picture with the face of the virtual character model includes the following steps:
aligning the facial feature points in the user's head portrait picture with the facial feature point positions of the virtual character model by a similarity transformation composed of rotation, scaling, and translation;
and mapping the pixel values of the target person's head portrait to the corresponding positions of the virtual character's face model according to the aligned facial feature points.
Preferably, rendering the clothing with three-dimensional modeling software to construct the virtual clothing includes the following steps:
loading a three-dimensional model of the garment into a virtual fitting system, wherein the three-dimensional model of the garment is created by a garment designer or manufacturer using professional three-dimensional modeling software, and contains details and shape information of the garment;
carrying out UV mapping on the three-dimensional model of the garment, and sampling on the three-dimensional model to obtain texture coordinates of each vertex;
acquiring corresponding texture data from the texture image according to the UV coordinates of the vertexes in the sampling process, and defining the appearance and material characteristics of the garment surface;
according to the acquired texture data, applying textures and materials to the three-dimensional model surface of the garment, and calculating illumination, shadow and reflection effects of the surface by using a shader or a rendering engine;
and adjusting the spring stiffness, damping coefficients, and point-mass parameters of the garment through a spring-mass system in the three-dimensional modeling software, changing the garment's motion behavior, and performing physical simulation of that motion behavior together with dynamic-effect adjustment of the garment's motion.
Preferably, the physical simulation of the garment's motion behavior and the dynamic-effect adjustment of its motion include the following steps:
controlling the elasticity of the garment by adjusting the stiffness of the springs;
and changing the perceived weight of the garment by adjusting the mass of the particles, thereby adjusting the dynamic effect of the garment during movement.
Preferably, connecting the virtual garment with the skeletal system of the virtual character model through the skeletal binding technique comprises the following steps:
assigning each vertex of the virtual garment a weight value related to the skeleton, so that bone motion drags and deforms the vertices of the garment model;
and adjusting the skin weights so that the virtual garment follows the motion of the character's bones.
Preferably, after the virtual garment is connected with the skeletal system of the virtual character model through the skeletal binding technology, the method further comprises the following steps:
inputting an image to be segmented into a Mask R-CNN network model to generate a segmentation Mask of a virtual character and clothing;
using connected-region analysis to find the connected regions of the virtual character and clothing, and applying additional correction and filtering according to criteria such as region size and shape to obtain the segmentation result;
and visualizing the segmentation result, generating a segmented image, and displaying the virtual character and the clothing separately.
The invention also provides a virtual try-on system based on a photographed and uploaded head portrait, which comprises the following modules:
the modeling module is used for uploading the head portrait picture of the user, and modeling the shape and the form of the head portrait picture by using three-dimensional modeling software to generate a virtual character model of the user;
the face detection module is used for using a face detection algorithm for the head portrait picture, identifying a boundary box of a face area according to different illumination and expression changes, and positioning the position of the face in the head portrait picture;
the facial feature extraction module is used for extracting facial features of the head portrait picture after the face detection algorithm is used, and determining facial feature points in the head portrait picture of the user;
the face fusion module is used for aligning the facial feature points in the user's head portrait picture with the face of the virtual character model through Procrustes analysis, and fusing the aligned features with a generative adversarial network (GAN) on the basis of texture mapping;
the virtual fitting module is used for rendering the clothing by using three-dimensional modeling software, constructing virtual clothing, and connecting the virtual clothing with a skeleton system of the virtual character model through a skeleton binding technology;
the gesture recognition module is used for capturing acceleration, angular velocity, and position data of the user's hand through a sensor-based physical gesture recognition technology, and recognizing and interpreting gestures through pattern recognition and algorithmic processing;
and the user interaction module is used for selecting the style, the color and the pattern of the clothing according to the gestures, rotating the virtual character model and checking the effect of putting on the clothing.
Compared with the prior art, the invention has the following remarkable advantages:
according to the invention, the facial and body characteristics of a user can be accurately analyzed by utilizing the computer vision and face recognition technology, the Procludes analysis technology is used for aligning the head portrait picture of the user with the face of the virtual character model, on the basis of texture mapping, feature fusion is carried out by generating the aligned characteristics of the countermeasure network GAN, the virtual clothing is connected with the skeleton system of the virtual character model by the skeleton binding technology, the virtual clothing is fused with the head portrait of the user, the more real and vivid try-on effect is displayed, gesture recognition is carried out based on a sensor, and the clothing is displayed by gestures. The virtual fitting experience is more accurate, convenient and vivid, the dependence on the accuracy of the head portrait photo of the user in the traditional virtual fitting system is overcome, the shopping experience and the user satisfaction are improved, meanwhile, the return rate is reduced, and more opportunities and development potential are brought to the electronic commerce industry.
Drawings
Fig. 1 is a schematic diagram of the virtual fitting system and method based on a photographed and uploaded head portrait according to an embodiment of the present invention;
fig. 2 is an overall flow chart of the present invention.
Detailed Description
The following clearly and completely describes the embodiments of the present invention with reference to the accompanying drawings. It is evident that the described embodiments are only some, not all, of the possible embodiments. All other embodiments obtained by those skilled in the art from these embodiments without inventive effort shall fall within the scope of the present invention.
Aiming at the defects of current virtual fitting systems, the invention provides a virtual fitting method and system based on head portrait uploading that overcome the traditional systems' dependence on the accuracy of the user's head portrait photo.
For easy understanding and explanation, as shown in fig. 1 and 2, the invention provides a virtual fitting method based on uploading head portraits, comprising the following steps:
step S1: uploading a head portrait picture of a user, and shaping the shape and the form of the head portrait picture by using three-dimensional modeling software to generate a virtual character model of the user.
In this step, uploading the head portrait picture of the user includes the following steps:
step S11: providing a user interface, and selecting to take a picture or select an avatar picture from the album through the user interface.
Step S12: resizing and format-converting the selected head portrait picture through image preprocessing to obtain an image suitable for recognition and analysis by the three-dimensional modeling step.
Wherein, the image preprocessing includes: first, an avatar image uploaded by a user is preprocessed to extract effective information. This may include removing noise, resizing the image, and color space conversion operations so that subsequent processing steps can more accurately identify and analyze the image.
In this step, using the three-dimensional modeling software to model the shape and form of the head portrait picture and generate the user's virtual character model includes the following steps.
For human modeling, the three-dimensional modeling software used is Autodesk Maya.
Step S13: modeling and shaping: the overall form of the head portrait is shaped through stretching, scaling, and deforming of basic geometry in Autodesk Maya, and details and features are added with sculpting tools.
Step S14: joint points are selected for the virtual character model and skeleton attributes are fused with them, connecting the bones with the various parts of the virtual character model.
Bone binding: binding bones to a character model is the process of connecting the skeleton to its various parts. This step typically requires manual or semi-automatic selection of the joint points, after which skeletal attributes are assigned to the vertices of the character model. The joint points generally correspond to joints of the human body, such as the head, neck, shoulders, elbows, and knees.
Step S15: texture mapping techniques are used to assign texture coordinates to each vertex or each face of the virtual character model and to make the virtual character more realistic by obtaining color or texture information corresponding to the surface of the virtual character model from the texture image.
Step S16: and obtaining smooth texture transition through texture interpolation on the texture coordinates of the vertexes or faces of the model surface.
Specifically, texture mapping: to increase the realism of the manikin, texture mapping techniques are used. In the sampling process, texture information is acquired from the corresponding positions of the texture image by using texture coordinates, and is applied to the corresponding points of the model surface. Since the texture coordinates of the vertices or faces of the model surface may be discretely distributed, texture interpolation is required to obtain a smooth texture transition effect, so that details such as skin, muscle, wrinkles and the like are added to make the model surface more realistic.
Step S2: the Viola-Jones face detection algorithm is applied to the head portrait picture, the bounding box of the face area is recognized under varying illumination and expression, and the position of the face is located in the head portrait picture.
Face detection: from the uploaded head portrait, the system detects the face region with a face detection algorithm. Such algorithms include feature-based methods (e.g., Haar features and HOG features) and deep-learning-based methods (e.g., convolutional neural networks). Facial features in the head portrait are detected to locate and mark the face region.
Step S3: extracting face characteristics of the head portrait picture after the face detection algorithm is used, and determining facial characteristic points in the head portrait picture of the user, wherein the method specifically comprises the following steps:
step S31: basic local or global features are extracted by using local binary pattern (Local Binary Patterns, LBP) and gradient direction histogram (Histogram of Oriented Gradient, HOG) methods, and texture, edge and shape information of a human face are captured.
Step S32: a suitable data transformation is applied to the extracted basic features; PCA dimensionality reduction is then performed on the transformed features by computing the covariance matrix and carrying out its eigenvalue decomposition.
Step S33: the top k most important eigenvectors are selected as principal components, and the PCA-reduced features are fused with the other basic features to determine all features of the target person's head portrait.
Step S4: the facial feature points in the user's head portrait picture are aligned with the face of the virtual character model through Procrustes analysis, and the aligned features are fused with a generative adversarial network (GAN) on the basis of texture mapping.
In this step, aligning facial feature points in a user avatar picture with a face of a virtual character model includes the steps of:
step S41: facial feature points in the target character head image are similarly aligned with facial feature point locations of the virtual character model through rotation, scaling, and transformation operations.
This can be achieved by rotation, scaling, and transformation operations so that the facial feature point locations of the two are as similar as possible.
Step S42: and mapping the pixel values in the target character head image to the corresponding positions of the virtual character face model according to the information after the facial feature points are aligned.
Feature fusion is then performed with a generative adversarial network (GAN) on the basis of texture mapping.
Step S5: the clothing is rendered with three-dimensional modeling software to construct virtual clothing, and the virtual clothing is connected with the skeletal system of the virtual character model through skeletal binding.
In this step, the garment is rendered by using three-dimensional modeling software to construct a virtual garment, comprising the steps of:
garment rendering: the geometrical shape and structure of the garment are created through a three-dimensional modeling technology, and then the instantaneity and fluency of virtual try-on are ensured through a physical simulation technology and a real-time rendering technology, so that the details of the virtual garment are more lifelike. The method comprises the following steps:
step S51: loading a three-dimensional model of the garment into a virtual fitting system, wherein the three-dimensional model of the garment is created by a garment designer or manufacturer using specialized three-dimensional modeling software, including details and shape information of the garment.
Step S52: texture coordinates for each vertex are obtained by UV mapping the garment model, sampling on the model.
Step S53: and acquiring corresponding texture data from the texture image according to the UV coordinates of the vertexes in the sampling process, and defining the appearance and the material characteristics of the garment surface.
Step S54: according to the acquired texture data, textures and materials are applied to the three-dimensional model surface of the garment, and a shader or a rendering engine is used for calculating illumination, shadow and reflection effects of the surface so as to enhance the fidelity of the garment.
Step S55: the stiffness and damping coefficients of the springs and the mass parameters of the mass points are adjusted on the garment through the spring-mass system, changing the garment's motion behavior and enabling physical simulation and dynamic effects.
The physical simulation and dynamic effects further include the following adjustments: the stiffness of the springs is tuned to control the elasticity of the garment, and the mass of the particles is tuned to change its perceived weight and directly adjust the dynamic effect, as sketched below.
In this step, within the Autodesk Maya modeling software, the vertices or faces of the virtual garment are connected to the skeletal system of the virtual character through skeleton-based binding, usually realized as weight binding, comprising the following steps:
step S56: each vertex is assigned a weight value related to the bone, and the vertex of the garment model is pulled to deform by the action of the bone.
Step S57: after the mesh binding is completed, the skin weights are adjusted to ensure that the virtual garment follows the motion of the character's skeleton, avoiding unnatural deformation or interpenetration. This step requires tuning the weight value of each vertex so that the garment stays consistent with the motion of the character model.
Step S6: through a physical gesture recognition technology based on a sensor, data such as acceleration, angular velocity, position and the like of a hand of a user are captured, and gesture recognition and interpretation are carried out through pattern recognition and algorithm processing.
Step S7: thus, the style, color and pattern of the garment can be selected by simple gestures and the virtual character model is rotated so that the user can view the effect of the garment from different angles.
After the virtual garment is connected with the skeletal system of the virtual character model through the skeletal binding technology, the method used by the invention further comprises the following steps:
and (3) clothing segmentation: and identifying the outline and boundary of the clothing by K-means clustering, grabCut algorithm and Canny algorithm based on edge detection, extracting the detail information of the clothing, and separating the clothing from the background and other parts of the human body.
With Mask R-CNN, a pre-trained model can be used as the basis and fine-tuned on our own dataset. During training, input images are passed to the network, which outputs segmentation masks for the virtual character and clothing. Once training is complete, the network can be applied to new images for prediction.
Step S8: the image to be segmented is input and a Mask R-CNN network model is used to generate segmentation masks for virtual characters and garments.
The output of the segmentation network is a binary mask image, where the foreground represents the virtual character and clothing region and the background represents the other region.
Step S9: connected region analysis is used to find connected regions of virtual characters and apparel, and additional corrections and filtering are made according to some criteria (e.g., region size, shape, etc.).
Step S10: and visually displaying the segmentation result, generating a segmented image, and displaying the virtual character and the clothing separately. The segmentation results may also be applied to other applications, such as virtual fitting systems to present different styles of clothing.
The invention also provides a virtual try-on system based on a photographed and uploaded head portrait, which comprises the following modules:
and the modeling module is used for uploading the head portrait picture of the user, and shaping the shape and the form of the head portrait picture by using three-dimensional modeling software to generate a virtual character model of the user.
And the face detection module is used for using a face detection algorithm to the head portrait picture, identifying the boundary frame of the face area according to different illumination and expression changes, and positioning the face position in the head portrait picture.
And the facial feature extraction module is used for extracting facial features of the head portrait picture obtained by using the face detection algorithm and determining facial feature points in the head portrait of the target person.
The face fusion module is used for aligning the facial feature points of the target person's head image with the face of the virtual character model through Procrustes analysis and fusing the aligned features with a generative adversarial network (GAN) on the basis of texture mapping.
The virtual fitting module is used for rendering the clothing by using three-dimensional modeling software, constructing virtual clothing, and connecting the virtual clothing with a skeletal system of a human body through a skeletal binding technology.
The gesture recognition module is used for capturing acceleration, angular velocity, and position data of the user's hand through a sensor-based physical gesture recognition technology, and recognizing and interpreting gestures through pattern recognition and algorithmic processing.
And the user interaction module is used for selecting the style, the color and the pattern of the clothing according to the gestures, rotating the virtual character model and checking the effect of wearing the clothing.
The present invention has been described in further detail with reference to specific preferred embodiments, and it should be understood by those skilled in the art that the present invention may be embodied with several simple deductions or substitutions without departing from the spirit of the invention.
Claims (10)
1. A virtual fitting method based on an uploaded head portrait, characterized by comprising the following steps:
uploading a head portrait picture of a user, and modeling the shape and form of the head portrait picture with three-dimensional modeling software to generate a virtual character model of the user;
applying a face detection algorithm to the head portrait picture, recognizing the bounding box of the face area under varying illumination and expression, and locating the position of the face in the head portrait picture;
extracting facial features from the head portrait picture after face detection, and determining the facial feature points in the user's head portrait picture;
aligning the facial feature points in the user's head portrait picture with the face of the virtual character model through Procrustes analysis, and fusing the aligned features with a generative adversarial network (GAN) on the basis of texture mapping;
rendering the clothing with three-dimensional modeling software to construct virtual clothing, and connecting the virtual clothing with the skeletal system of the virtual character model through a skeletal binding technique;
capturing acceleration, angular velocity, and position data of the user's hand through a sensor-based physical gesture recognition technology, and recognizing and interpreting gestures through pattern recognition and algorithmic processing;
and selecting the clothing style, color, and pattern according to the gestures, and rotating the virtual character model to check the effect of wearing the clothing.
2. The virtual fitting method based on an uploaded head portrait of claim 1, wherein uploading the head portrait picture of the user comprises the following steps:
providing a user interface through which the user either takes a photograph or selects a head portrait picture from the album;
and resizing and format-converting the selected head portrait picture through image preprocessing to obtain an image suitable for recognition and analysis by the three-dimensional modeling step.
3. The virtual fitting method based on an uploaded head portrait of claim 1, wherein using three-dimensional modeling software to model the shape and form of the head portrait picture and generate the user's virtual character model comprises the following steps:
shaping the overall form of the head portrait through stretching, scaling, and deforming of basic geometry in the three-dimensional modeling software, and adding details and features with sculpting tools;
selecting joint points for the virtual character model, fusing skeleton attributes with those joint points, and connecting the skeleton with each part of the virtual character model;
assigning texture coordinates to each vertex or face of the virtual character model with texture-mapping techniques, and obtaining the color or texture information corresponding to the model surface from the texture image;
and obtaining smooth texture transitions from the texture coordinates of the vertices or faces of the model surface through texture interpolation.
4. The virtual fitting method based on an uploaded head portrait of claim 1, wherein extracting facial features from the head portrait picture after face detection and determining the facial feature points in the user's head portrait picture comprises the following steps:
extracting basic features with the local binary pattern or histogram of oriented gradients method, capturing the texture, edge, and shape information of the face;
applying a suitable data transformation to the extracted basic features, then performing PCA dimensionality reduction on the transformed features by computing the covariance matrix and carrying out its eigenvalue decomposition;
and selecting the top k most important eigenvectors as principal components, and fusing the PCA-reduced features with the other basic features to determine all features of the target person's head portrait.
5. The virtual fitting method based on an uploaded head portrait of claim 1, wherein aligning the facial feature points in the user's head portrait picture with the face of the virtual character model comprises the following steps:
aligning the facial feature points in the user's head portrait picture with the facial feature point positions of the virtual character model by a similarity transformation composed of rotation, scaling, and translation;
and mapping the pixel values of the target person's head portrait to the corresponding positions of the virtual character's face model according to the aligned facial feature points.
6. The virtual fitting method based on an uploaded head portrait of claim 1, wherein rendering the clothing with three-dimensional modeling software to construct the virtual clothing comprises the following steps:
loading a three-dimensional model of the garment into a virtual fitting system, wherein the three-dimensional model of the garment is created by a garment designer or manufacturer using professional three-dimensional modeling software, and contains details and shape information of the garment;
carrying out UV mapping on the three-dimensional model of the garment, and sampling on the three-dimensional model to obtain texture coordinates of each vertex;
acquiring corresponding texture data from the texture image according to the UV coordinates of the vertexes in the sampling process, and defining the appearance and material characteristics of the garment surface;
according to the acquired texture data, applying textures and materials to the three-dimensional model surface of the garment, and calculating illumination, shadow and reflection effects of the surface by using a shader or a rendering engine;
and adjusting the spring stiffness, damping coefficients, and point-mass parameters of the garment through a spring-mass system in the three-dimensional modeling software, changing the garment's motion behavior, and performing physical simulation of that motion behavior together with dynamic-effect adjustment of the garment's motion.
7. The virtual fitting method based on an uploaded head portrait of claim 6, wherein the physical simulation of the garment's motion behavior and the dynamic-effect adjustment of its motion comprise the following steps:
controlling the elasticity of the garment by adjusting the stiffness of the springs;
and changing the perceived weight of the garment by adjusting the mass of the particles, thereby adjusting the dynamic effect of the garment during movement.
8. The virtual fitting method based on an uploaded head portrait of claim 1, wherein connecting the virtual garment with the skeletal system of the virtual character model through the skeletal binding technique comprises the following steps:
assigning each vertex of the virtual garment a weight value related to the skeleton, so that bone motion drags and deforms the vertices of the garment model;
and adjusting the skin weights so that the virtual garment follows the motion of the character's bones.
9. The virtual fitting method based on an uploaded head portrait of claim 1, further comprising the following steps:
inputting an image to be segmented into a Mask R-CNN network model to generate a segmentation Mask of a virtual character and clothing;
using connected-region analysis to find the connected regions of the virtual character and clothing, and applying additional correction and filtering according to criteria such as region size and shape to obtain the segmentation result;
and visualizing the segmentation result, generating a segmented image, and displaying the virtual character and the clothing separately.
10. A virtual try-on system based on a photographed and uploaded head portrait, characterized by comprising the following modules:
the modeling module is used for uploading the head portrait picture of the user, and modeling the shape and the form of the head portrait picture by using three-dimensional modeling software to generate a virtual character model of the user;
the face detection module is used for using a face detection algorithm for the head portrait picture, identifying a boundary box of a face area according to different illumination and expression changes, and positioning the position of the face in the head portrait picture;
the facial feature extraction module is used for extracting facial features of the head portrait picture after the face detection algorithm is used, and determining facial feature points in the head portrait picture of the user;
the face fusion module is used for aligning the facial feature points in the user's head portrait picture with the face of the virtual character model through Procrustes analysis, and fusing the aligned features with a generative adversarial network (GAN) on the basis of texture mapping;
the virtual fitting module is used for rendering the clothing by using three-dimensional modeling software, constructing virtual clothing, and connecting the virtual clothing with a skeleton system of the virtual character model through a skeleton binding technology;
the gesture recognition module is used for capturing acceleration, angular velocity, and position data of the user's hand through a sensor-based physical gesture recognition technology, and recognizing and interpreting gestures through pattern recognition and algorithmic processing;
and the user interaction module is used for selecting the style, the color and the pattern of the clothing according to the gestures, rotating the virtual character model and checking the effect of putting on the clothing.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311523000.5A CN117237583B (en) | 2023-11-16 | 2023-11-16 | Virtual fitting method and system based on uploading head portrait |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117237583A true CN117237583A (en) | 2023-12-15 |
CN117237583B CN117237583B (en) | 2024-02-09 |
Family
ID=89086538
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311523000.5A Active CN117237583B (en) | 2023-11-16 | 2023-11-16 | Virtual fitting method and system based on uploading head portrait |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117237583B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104021589A (en) * | 2014-06-27 | 2014-09-03 | 江苏中佑石油机械科技有限责任公司 | Three-dimensional fitting simulating method |
CN104268763A (en) * | 2014-09-30 | 2015-01-07 | 江苏中佑石油机械科技有限责任公司 | Three-dimensional fitting marketing platform |
CN104794441A (en) * | 2015-04-15 | 2015-07-22 | 重庆邮电大学 | Human face feature extracting method based on active shape model and POEM (patterns of oriented edge magnitudes) texture model in complicated background
CN108885794A (en) * | 2016-01-27 | 2018-11-23 | Nitin Vats | Virtually trying clothes on a realistic body model of a user
CN110543826A (en) * | 2019-08-06 | 2019-12-06 | 尚尚珍宝(北京)网络科技有限公司 | Image processing method and device for virtual wearing of wearable product |
CN114663199A (en) * | 2022-05-17 | 2022-06-24 | 武汉纺织大学 | Dynamic display real-time three-dimensional virtual fitting system and method |
Also Published As
Publication number | Publication date |
---|---|
CN117237583B (en) | 2024-02-09 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||