CN107798713B - Image deformation method for two-dimensional virtual fitting

Image deformation method for two-dimensional virtual fitting

Info

Publication number
CN107798713B
CN107798713B (application CN201710786626.3A)
Authority
CN
China
Prior art keywords
deformation
human body
clothing
body posture
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710786626.3A
Other languages
Chinese (zh)
Other versions
CN107798713A (en)
Inventor
刘骊
周磊
付晓东
黄青松
刘利军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunnan Youmai Technology Co.,Ltd.
Original Assignee
Kunming University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunming University of Science and Technology filed Critical Kunming University of Science and Technology
Priority to CN201710786626.3A priority Critical patent/CN107798713B/en
Publication of CN107798713A publication Critical patent/CN107798713A/en
Application granted granted Critical
Publication of CN107798713B publication Critical patent/CN107798713B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G06T11/60 - Editing figures and text; Combining figures or text
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to an image deformation method for two-dimensional virtual fitting, and belongs to the field of computer graphics and image processing. Firstly, a Gaussian mixture model is established for the input initial human body posture clothing image and the input target human body posture image, and the clothing region and the human body posture are identified with the GrabCut segmentation algorithm. Secondly, a least squares object-space deformation method is used to perform detail-preserving deformation of the clothing region and the human body posture. Then, the non-rigid energy of the whole deformation process is reduced with a Dijkstra shortest-path algorithm to obtain an initial deformation result. Finally, the clothing contour is adjusted with an as-rigid-as-possible iterative moving least squares object-space deformation method to obtain the final two-dimensional virtual fitting effect. The image deformation method for two-dimensional virtual fitting adopted by the invention has low complexity and cost, and can intuitively display the virtual fitting effect under different human body postures.

Description

Image deformation method for two-dimensional virtual fitting
Technical Field
The invention relates to an image deformation method for two-dimensional virtual fitting, and belongs to the field of computer graphics and image processing.
Background
Apparel, one of the earliest categories of goods sold through e-commerce, has grown into its largest and most mature segment. Selling clothing online has many advantages that the traditional retail model lacks: it lets users fully enjoy the fun and interactivity of online shopping, and it carries great market value and economic benefit. However, text descriptions and picture displays alone cannot provide a good shopping experience, which restricts the further development of online clothing sales. For this reason, virtual fitting technology has attracted great attention from e-commerce platforms such as Taobao, Jingdong and Amazon. Showing how well a virtual garment fits can reduce the return rate caused by ill-fitting purchases, and the matching effect of virtual fitting can make up for the gap between the garment as displayed and the actual try-on result. In recent years, many two-dimensional and three-dimensional virtual fitting systems have been developed. However, these systems only show whether the style and color are suitable; they cannot let the user really perceive the material and details of the fabric or whether the garment fits, so their usability and applicability are limited. On the technical side, several key problems of three-dimensional virtual fitting remain poorly solved, mainly in three respects: (1) the core three-dimensional modeling algorithms are highly complex, modeling the human body and garments requires a large amount of data, processing times are long, and the real-time performance of such systems is limited; (2) the simulated texture, drape and similar effects of three-dimensional clothing models lack realism, and simulating squeezing, collision and stretching between the human body model and the clothing model, and between garments, is complex; (3) building a high-quality three-dimensional virtual fitting system usually requires expensive external equipment (such as a three-dimensional body scanner and high-definition cameras) and corresponding auxiliary software, so the cost is high. Clothing display on e-commerce websites is characterized by large numbers of garments and short update cycles, and two-dimensional virtual try-on displays are therefore used more and more. Two-dimensional virtual try-on loads quickly, looks realistic and costs little, and has become the mainstream display technology in today's clothing e-commerce.
Image deformation is the precondition of two-dimensional virtual fitting, and improving its accuracy is the key to improving the virtual fitting effect. Known image deformation methods are mainly realized by deforming thousands of pixel points in real time. For example, Schaefer et al. (ACM Trans. Graph., 25(3), 2006, 533-540) propose an MLS-based image-space deformation system for image deformation. Weng et al. (Vis. Comput., 22, 2006, 653-660) propose a two-dimensional deformation system that uses nonlinear least squares optimization and performs image deformation with the Laplacian coordinates of the interior and boundary of the target region. These methods place high demands on the input image and therefore have considerable limitations. Moreover, known two-dimensional virtual fitting systems still have shortcomings: the user can hardly judge how well the garment fits or how it would actually look when worn, and the displayed try-on result is usually limited to a single viewing angle, or to the front and back views, with a relatively fixed human body posture, so the try-on experience is poor. In contrast, the deformation method of the invention adopts an as-rigid-as-possible iterative moving least squares object-space deformation, and can produce different two-dimensional virtual fitting effects for different human body postures.
Disclosure of Invention
The invention provides an image deformation method for two-dimensional virtual fitting, which can obtain different two-dimensional virtual fitting effects according to different human body postures.
The technical scheme of the invention is as follows: firstly, a Gaussian mixture model is established for the input initial human body posture clothing image and the target human body posture image, and the clothing region and the human body posture are identified with the GrabCut segmentation algorithm; secondly, a least squares object-space deformation method is used to perform detail-preserving deformation of the clothing region and the human body posture; then, the non-rigid energy of the whole deformation process is reduced with a Dijkstra shortest-path algorithm to obtain an initial deformation result; finally, the clothing contour is adjusted with an as-rigid-as-possible iterative moving least squares object-space deformation method to obtain the final two-dimensional virtual fitting effect.
The method comprises the following specific steps:
Step1, establishing a Gaussian mixture model for the input initial human body posture clothing image and the input target human body posture image, and identifying the clothing region and the human body posture through iterative segmentation with the GrabCut segmentation algorithm;
Step2, constructing, with a least squares object-space deformation method, a deformation function from the initial position P of each pixel point in the clothing region to the corresponding target position Q of the human body posture, establishing a rotation matrix, moving the pixel points from the initial position P to the target position Q, and performing detail-preserving deformation of the clothing region and the human body posture to obtain a non-rigid deformation result;
Step3, selecting the shortest moving path for each pixel point with a Dijkstra shortest-path algorithm, reducing the non-rigid energy of the whole deformation process, improving the accuracy of the pixel movement, and continuously calibrating and updating the pixel positions to obtain an initial deformation result;
Step4, according to the human body contour and based on the initial clothing deformation result, adjusting the fitted clothing contour with an as-rigid-as-possible iterative moving least squares object-space deformation method to finally obtain the two-dimensional virtual fitting effect.
The invention has the beneficial effects that: considering that per-pixel processing is complex and that existing methods place high demands on the input image, the invention adopts an as-rigid-as-possible iterative moving least squares object-space deformation method, which achieves high efficiency.
The image deformation method for two-dimensional virtual fitting solves the problem of a single, fixed human body posture, achieves virtual fitting under different human body postures, preserves the details of the clothing as much as possible, and produces a virtual fitting effect with a strong sense of realism.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 shows the segmentation results of the clothing and the human body posture;
FIG. 3 is a schematic diagram of deformation control points;
FIG. 4 is a schematic view of an initial deformation process;
FIG. 5 is a schematic diagram of the final effect of a two-dimensional virtual try-on;
FIG. 6 is a graph comparing the deformation results of the method of the present invention with other methods.
Detailed Description
Example 1: as shown in fig. 1, the image deformation method for two-dimensional virtual fitting first establishes a Gaussian mixture model for the input initial human body posture clothing image and the target human body posture image, and identifies the clothing region and the human body posture with the GrabCut segmentation algorithm; secondly, a least squares object-space deformation method is used to perform detail-preserving deformation of the clothing region and the human body posture; then, the non-rigid energy of the whole deformation process is reduced with a Dijkstra shortest-path algorithm to obtain an initial deformation result; finally, the clothing contour is adjusted with an as-rigid-as-possible iterative moving least squares object-space deformation method to obtain the final two-dimensional virtual fitting effect.
The method comprises the following specific steps:
Step1, establishing a Gaussian mixture model for the input initial human body posture clothing image and the input target human body posture image, and identifying the clothing region and the human body posture through iterative segmentation with the GrabCut segmentation algorithm;
Step2, constructing, with a least squares object-space deformation method, a deformation function from the initial position P of each pixel point in the clothing region to the corresponding target position Q of the human body posture, establishing a rotation matrix, moving the pixel points from the initial position P to the target position Q, and performing detail-preserving deformation of the clothing region and the human body posture to obtain a non-rigid deformation result;
Step3, selecting the shortest moving path for each pixel point with a Dijkstra shortest-path algorithm, reducing the non-rigid energy of the whole deformation process, improving the accuracy of the pixel movement, and continuously calibrating and updating the pixel positions to obtain an initial deformation result;
Step4, according to the human body contour and based on the initial clothing deformation result, adjusting the fitted clothing contour with an as-rigid-as-possible iterative moving least squares object-space deformation method to finally obtain the two-dimensional virtual fitting effect.
Example 2: the method comprises the following specific steps:
Step1, establishing a Gaussian mixture model for the input initial human body posture clothing image and the input target human body posture image, and identifying the clothing region and the human body posture through iterative segmentation with the GrabCut segmentation algorithm;
Assume that the input initial human body posture clothing image consists of pixels z_n, n ∈ {1, 2, …, N}, where N denotes the number of pixels and each pixel z_n is represented in the RGB color space. To facilitate establishing a GMM for every pixel, the GrabCut segmentation algorithm introduces the vector k = {k_1, k_2, …, k_N} as a per-pixel parameter that assigns each pixel to a GMM component. At the same time, the algorithm introduces an opacity value α_n ∈ {0, 1} for each pixel, where α_n = 0 denotes the image background and α_n = 1 the image foreground, so that the final segmentation result is expressed by the array α = (α_1, …, α_N). Each GMM is a mixture of K Gaussian components (K = 5 in this example), and with the GMM component variable k the Gibbs energy of the segmentation is expressed as follows:
E(α, k, θ, z) = U(α, k, θ, z) + V(α, z)
In the Gibbs energy equation, α represents the opacity of an image pixel, θ represents the GMM color model parameters, E represents the Gibbs energy, U represents the data term, and V represents the smoothness term.
Taking the color GMM into account, the data term U is defined as:
U(α, k, θ, z) = Σ_n D(α_n, k_n, θ, z_n)
D(α_n, k_n, θ, z_n) = −log p(z_n | α_n, k_n, θ) − log π(α_n, k_n)
where p(·) denotes a Gaussian probability distribution and π(·) denotes the mixture weight coefficients, so that:
D(α_n, k_n, θ, z_n) = −log π(α_n, k_n) + 0.5 log det Σ(α_n, k_n) + 0.5 [z_n − ξ(α_n, k_n)]^T Σ(α_n, k_n)^{−1} [z_n − ξ(α_n, k_n)]
It follows that the model parameters θ are:
θ = {π(α, k), ξ(α, k), Σ(α, k); α = 0, 1; k = 1, …, K}
The smoothness term V is computed from the Euclidean distance between neighboring pixels in color space:
V(α, z) = γ Σ_{(m,n)∈C} [α_m ≠ α_n] exp(−β ‖z_m − z_n‖²)
where ξ denotes the mean vector and Σ the covariance matrix of a GMM component, γ and β are the weighting parameters of the neighborhood term, C is the set of neighboring pixel pairs, and m and n index any two of the N pixels. The above steps are repeated until the algorithm converges; in this example the GMM variable k takes the value 43 at convergence, giving a good convergence result. At this point the image segmentation result, i.e., the clothing region, is obtained.
Because the human body posture images are shot against a plain backdrop, the image size is set to a standard of 290 × 425 pixels for convenient calculation. When the GrabCut segmentation algorithm is used to extract the human body posture, simple manual interaction can be applied to improve the efficiency of the algorithm; the specific segmentation steps are as described above. The segmentation results for some of the garments and body postures are shown in fig. 2.
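For illustration, the following Python sketch shows how the Step 1 segmentation could be reproduced with OpenCV's built-in GrabCut, which implements the GMM and Gibbs-energy formulation described above. It is a minimal sketch rather than the implementation used in the invention; the file names, bounding rectangle and iteration count are assumed values.

```python
import cv2
import numpy as np

# Hypothetical input: the initial human body posture clothing image (file name assumed).
img = cv2.imread("initial_pose_clothing.png")

# Buffers required by OpenCV's GrabCut: a label mask and the background/foreground GMMs.
mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)   # parameters of the background GMM (5 components)
fgd_model = np.zeros((1, 65), np.float64)   # parameters of the foreground GMM (5 components)

# Rough bounding rectangle (x, y, w, h) around the garment -- assumed manual interaction.
rect = (30, 40, 230, 345)

# Iterative minimisation of the Gibbs energy E(alpha, k, theta, z) = U + V.
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 10, cv2.GC_INIT_WITH_RECT)

# Pixels labelled as sure or probable foreground form the clothing region.
fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
clothing_region = img * fg[:, :, None].astype(np.uint8)
cv2.imwrite("clothing_region.png", clothing_region)
```

The same call, with a different rectangle, could be applied to the target human body posture image photographed against the plain backdrop.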
In order to improve the accuracy and the processing efficiency of the garment deformation, the invention grids the obtained clothing and human body posture images. Internal deformation control points are added interactively at human body feature regions (such as the neck, shoulders, chest, elbow joints, wrist joints, abdomen, hips, knees and ankles), and external deformation control points are added on the clothing and human body posture contours, as shown in fig. 3. The deformation process can then be summarized as aligning the control points of the clothing with those of the human body posture and migrating the adjacent grid nodes accordingly: in the detail-preserving clothing deformation, the internal control points are used to deform the garment as a whole and obtain the initial deformation result; in the adjustment of the external clothing contour, the external control points are used to locally adjust the clothing contour and obtain the final virtual fitting effect.
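A possible way to set up the grid and the two groups of control points described above is sketched below in Python. The grid spacing, the contour sampling stride and the hard-coded internal control points are purely illustrative assumptions, and OpenCV 4 is assumed for the findContours return signature.

```python
import cv2
import numpy as np

# Binary clothing mask produced by Step 1 (file name assumed).
mask = cv2.imread("clothing_mask.png", cv2.IMREAD_GRAYSCALE) > 0

# Regular grid nodes restricted to the garment region (spacing is an assumed value).
step = 10
ys, xs = np.mgrid[0:mask.shape[0]:step, 0:mask.shape[1]:step]
inside = mask[ys, xs]
grid_nodes = np.stack([xs[inside], ys[inside]], axis=1)       # (x, y) node coordinates

# External deformation control points: sampled along the garment contour.
contours, _ = cv2.findContours(mask.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
external_control = contours[0][::25, 0, :]                    # every 25th contour pixel (assumed stride)

# Internal deformation control points: placed interactively at body feature regions
# (neck, shoulders, elbows, ...); the coordinates below are hypothetical examples.
internal_control = np.array([[120, 60], [80, 95], [160, 95]])
```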
As shown in fig. 4, the initial deformation process first takes as input the clothing and human body posture images with the control points added, as shown in fig. 4(a); the clothing is then aligned and registered with the human body image using the internal control points, as shown in fig. 4(b); finally, the unaligned parts of the clothing are deformed using the internal control points, as described in Step 2 and Step 3:
Step2, constructing, with a least squares object-space deformation method, a deformation function from the initial position P of each pixel point in the clothing region to the corresponding target position Q of the human body posture, establishing a rotation matrix, moving the pixel points from the initial position P to the target position Q, and performing detail-preserving deformation of the clothing region and the human body posture to obtain a non-rigid deformation result;
the image is first represented as set J ═ μ, η), where
Figure BDA0001398218790000052
Representing the set of all mesh nodes of the image, η ═ eijE.g. mu x mu represents the edge connecting every two mesh nodes i and j, and at the same time, let
Figure BDA0001398218790000053
And
Figure BDA0001398218790000054
respectively representing the original and deformed mesh nodes,
Figure BDA0001398218790000055
and
Figure BDA0001398218790000056
respectively representing the initial position of the node and the deformed target position, and therefore, for each node vεConstructing a local minimum deformation function: gamma rayε:R2→R2The formula is as follows:
Figure BDA0001398218790000061
wherein d is R2X v → R is represented in R2R → R represents a non-negative monotonically decreasing weight function. In this example, the number of mesh nodes Φ is 2000, and the result of rigid deformation of the garment image is obtained through calculation.
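The local construction of Step 2 follows the moving least squares object-space formulation of Schaefer et al. cited in the background. The sketch below is one possible numpy realisation of the rigid per-node transform for a single grid node; the function name, the inverse-distance weight w(d) = 1/d^(2·alpha) and the parameter alpha are assumed choices of the non-negative monotonically decreasing weight function, not values taken from the patent.

```python
import numpy as np

def perp(u):
    """90-degree rotation of a 2D vector, used to build the rotation-only (rigid) solution."""
    return np.array([-u[1], u[0]])

def mls_rigid_point(v, p, q, alpha=1.0, eps=1e-8):
    """Rigid moving least squares transform of one grid node v (shape (2,)),
    driven by control-point pairs p -> q (each of shape (m, 2))."""
    w = 1.0 / (np.sum((p - v) ** 2, axis=1) + eps) ** alpha   # decreasing weights w(d(v, p_j))
    p_star = (w[:, None] * p).sum(0) / w.sum()                # weighted centroids
    q_star = (w[:, None] * q).sum(0) / w.sum()
    p_hat, q_hat = p - p_star, q - q_star
    vp = v - p_star

    # Accumulate the rotation part of the closed-form rigid MLS solution.
    fr = np.zeros(2)
    for wi, ph, qh in zip(w, p_hat, q_hat):
        A = wi * np.array([ph, -perp(ph)]) @ np.array([vp, -perp(vp)]).T
        fr += qh @ A
    n = np.linalg.norm(fr)
    return q_star if n < eps else np.linalg.norm(vp) * fr / n + q_star

# Example: move every mesh node using the same control-point pairs P -> Q.
# deformed = np.array([mls_rigid_point(v, P, Q) for v in grid_nodes])
```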
Step3, selecting the shortest moving path for each pixel point with a Dijkstra shortest-path algorithm, reducing the non-rigid energy of the whole deformation process, improving the accuracy of the pixel movement, and continuously calibrating and updating the pixel positions to obtain the initial deformation result. The process is as follows:
First, using the rigid deformation result, the rotation matrix O_E is obtained for each node unit C_E.
Secondly, for each node unit C_E, its residual term g_E is computed from the rotation O_E applied to the edge vectors p_y − p_E of its adjacent nodes; the sparse linear system Lq = g is then constructed and solved, where L is the discrete Laplace operator, q is the vector containing the unknown positions q_E, p_y and p_E denote the coordinates of any two adjacent grid nodes, and the vector g contains the right-hand side g_E of the equation for each q_E.
Finally, the above process is repeated until all nodes have moved to their target positions, and the initial deformation result is obtained, as shown in fig. 4(c).
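Step 3 combines a shortest-path search over the mesh graph with the solution of the sparse system Lq = g. The scipy sketch below is a minimal illustration under stated assumptions: the function names are hypothetical, the averaged-rotation right-hand side is an ARAP-style choice that the patent text does not spell out, and the soft anchoring of one node is an assumed boundary condition used only to make the Laplacian system non-singular.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu
from scipy.sparse.csgraph import dijkstra

def solve_laplacian_positions(p, edges, rotations):
    """Solve L q = g for updated node positions.
    p: (n, 2) initial positions, edges: list of (i, j) index pairs,
    rotations: (n, 2, 2) per-node rotation matrices O_E from the rigid deformation."""
    n = len(p)
    L = sp.lil_matrix((n, n))
    g = np.zeros((n, 2))
    for i, j in edges:
        L[i, i] += 1.0; L[j, j] += 1.0
        L[i, j] -= 1.0; L[j, i] -= 1.0
        # Rotated edge differential; averaging the two rotations is an assumed ARAP-style choice.
        r = 0.5 * (rotations[i] + rotations[j]) @ (p[j] - p[i])
        g[i] -= r
        g[j] += r
    # Softly anchor node 0 at its initial position to remove the Laplacian's null space (assumed).
    L[0, 0] += 1.0
    g[0] += p[0]
    lu = splu(L.tocsc())
    return np.column_stack([lu.solve(g[:, 0]), lu.solve(g[:, 1])])

def node_distances(p, edges, source):
    """Dijkstra shortest-path distances over the mesh graph, used to order the node
    moves along the shortest moving paths."""
    n = len(p)
    w = sp.lil_matrix((n, n))
    for i, j in edges:
        d = np.linalg.norm(p[i] - p[j])
        w[i, j] = d; w[j, i] = d
    return dijkstra(w.tocsr(), directed=False, indices=source)
```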
Step4, according to the human body contour and based on the initial clothing deformation result, the fitted clothing contour is adjusted with the as-rigid-as-possible iterative moving least squares object-space deformation method, and the two-dimensional virtual fitting effect is finally obtained. The specific process is as follows:
Let H ⊂ R² denote the set of node positions of the contour of a garment image, and let χ_n, χ_l ∈ H denote two adjacent undeformed node positions, with an associated weight λ > 0. Each contour node is expressed in relative coordinates with respect to H, i.e., by its inner products with the orthonormal basis attached to χ_n, and its global coordinates are recovered from these relative coordinates. Let χ_n' and χ_l' denote the new positions of χ_n and χ_l on the deformed grid; the position of the corresponding new node is then computed from χ_n', χ_l' and its stored relative coordinates. This completes the adjustment of the clothing contour deformation and yields the two-dimensional virtual fitting effect, as shown in fig. 4(d). Some of the resulting virtual try-on effects are shown in fig. 5.
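One way to realise the relative-coordinate construction of Step 4 is sketched below: each contour node is encoded by its inner products with an orthonormal basis attached to the undeformed edge (χ_n, χ_l) and decoded from the deformed edge. This is a hypothetical reading under stated assumptions; the function names, the example coordinates and the omission of the weight λ and of the iteration schedule are all assumptions rather than the patent's exact formulation.

```python
import numpy as np

def frame(a, b):
    """Orthonormal basis attached to the edge a -> b: unit edge direction and its 90-degree rotation."""
    e1 = (b - a) / np.linalg.norm(b - a)
    return e1, np.array([-e1[1], e1[0]])

def encode_relative(v, x_n, x_l):
    """Relative coordinates of contour node v: inner products with the basis at x_n."""
    e1, e2 = frame(x_n, x_l)
    d = v - x_n
    return np.array([d @ e1, d @ e2])

def decode_global(rel, x_n_new, x_l_new):
    """Global position of the node recovered from the deformed edge x_n' -> x_l'."""
    e1, e2 = frame(x_n_new, x_l_new)
    return x_n_new + rel[0] * e1 + rel[1] * e2

# Example: adjust one garment contour node after its reference edge has moved.
v = np.array([105.0, 82.0])                                   # hypothetical undeformed contour node
x_n, x_l = np.array([100.0, 80.0]), np.array([110.0, 80.0])   # undeformed edge
x_n2, x_l2 = np.array([102.0, 84.0]), np.array([111.0, 87.0]) # deformed edge
v_new = decode_global(encode_relative(v, x_n, x_l), x_n2, x_l2)
```

Iterating this encode/decode step over all contour nodes locally fits the clothing outline to the human body contour while keeping the adjustment as rigid as possible.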
To demonstrate that the method performs well for two-dimensional garment deformation, it is compared with other deformation methods (a rigid deformation method and a linear moving least squares method); the comparison results are shown in fig. 6, and Table 1 shows that the garment image deformation of the present method requires a shorter processing time and is more efficient than the other methods.
Table 1. Comparison of the processing times of the different methods
While the present invention has been described in detail with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, and various changes can be made without departing from the spirit of the present invention within the knowledge of those skilled in the art.

Claims (1)

1. An image deformation method for two-dimensional virtual fitting, characterized by comprising the following steps:
Step1, establishing a Gaussian mixture model for the input initial human body posture clothing image and the input target human body posture image, and identifying the clothing region and the human body posture with the GrabCut segmentation algorithm;
Step2, performing detail-preserving deformation of the clothing region and the human body posture with a least squares object-space deformation method;
the specific process of Step2 is: constructing, with the least squares object-space deformation method, a deformation function from the initial position P of each pixel point in the clothing region to the corresponding target position Q of the human body posture, establishing a rotation matrix, moving the pixel points from the initial position P to the target position Q, and performing detail-preserving deformation of the clothing region and the human body posture to obtain a non-rigid deformation result;
Step3, reducing the non-rigid energy of the whole deformation process with a Dijkstra shortest-path algorithm to obtain an initial deformation result;
in Step3, the Dijkstra shortest-path algorithm is adopted to select the shortest moving path for each pixel point, which reduces the non-rigid energy of the whole deformation process, improves the accuracy of the pixel movement, and continuously calibrates and updates the pixel positions to obtain the initial deformation result; the process is as follows:
first, using the rigid deformation result, the rotation matrix O_E is obtained for each node unit C_E;
secondly, for each node unit C_E, its residual term g_E is computed from the rotation O_E applied to the edge vectors p_y − p_E of its adjacent nodes; the sparse linear system Lq = g is then constructed and solved, where L is the discrete Laplace operator, q is the vector containing the unknown positions q_E, p_y and p_E denote the coordinates of any two adjacent grid nodes, and the vector g contains the right-hand side g_E of the equation for each q_E;
finally, the above process is repeated until all nodes have moved to their target positions, and the initial deformation result is obtained;
Step4, adjusting the clothing contour with an as-rigid-as-possible iterative moving least squares object-space deformation method to obtain the final two-dimensional virtual fitting effect;
in Step4, according to the human body contour and based on the initial clothing deformation result, the fitted clothing contour is adjusted with the as-rigid-as-possible iterative moving least squares object-space deformation method, and the two-dimensional virtual fitting effect is finally obtained; the specific process is as follows:
let H ⊂ R² denote the set of node positions of the contour of a garment image, and let χ_n, χ_l ∈ H denote two adjacent undeformed node positions, with an associated weight λ > 0; each contour node is expressed in relative coordinates with respect to H, i.e., by its inner products with the orthonormal basis attached to χ_n, and its global coordinates are recovered from these relative coordinates; let χ_n' and χ_l' denote the new positions of χ_n and χ_l on the deformed grid; the position of the corresponding new node is then computed from χ_n', χ_l' and its stored relative coordinates, which completes the adjustment of the clothing contour deformation and yields the final two-dimensional virtual fitting effect.
CN201710786626.3A 2017-09-04 2017-09-04 Image deformation method for two-dimensional virtual fitting Active CN107798713B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710786626.3A CN107798713B (en) 2017-09-04 2017-09-04 Image deformation method for two-dimensional virtual fitting

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710786626.3A CN107798713B (en) 2017-09-04 2017-09-04 Image deformation method for two-dimensional virtual fitting

Publications (2)

Publication Number Publication Date
CN107798713A CN107798713A (en) 2018-03-13
CN107798713B true CN107798713B (en) 2021-04-09

Family

ID=61532273

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710786626.3A Active CN107798713B (en) 2017-09-04 2017-09-04 Image deformation method for two-dimensional virtual fitting

Country Status (1)

Country Link
CN (1) CN107798713B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109829785B (en) * 2019-01-21 2021-07-09 深圳市云之梦科技有限公司 Virtual fitting method and device, electronic equipment and storage medium
CN110069195B (en) * 2019-01-31 2020-06-30 北京字节跳动网络技术有限公司 Image dragging deformation method and device
CN110176063B (en) * 2019-05-07 2022-05-27 浙江凌迪数字科技有限公司 Clothing deformation method based on human body Laplace deformation
CN111787242B (en) 2019-07-17 2021-12-07 北京京东尚科信息技术有限公司 Method and apparatus for virtual fitting
CN110473296B (en) * 2019-08-15 2023-09-26 浙江中国轻纺城网络有限公司 Mapping method and device
CN111062777B (en) * 2019-12-10 2022-06-24 中山大学 Virtual fitting method and system capable of retaining example clothes details
CN111275518B (en) * 2020-01-15 2023-04-21 中山大学 Video virtual fitting method and device based on mixed optical flow
CN112258269B (en) * 2020-10-19 2024-05-28 武汉纺织大学 Virtual fitting method and device based on 2D image
CN112232914B (en) * 2020-10-19 2023-04-18 武汉纺织大学 Four-stage virtual fitting method and device based on 2D image

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104933757A (en) * 2015-05-05 2015-09-23 昆明理工大学 Method of three-dimensional garment modeling based on style descriptor
CN106021603A (en) * 2016-06-20 2016-10-12 昆明理工大学 Garment image retrieval method based on segmentation and feature matching

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7559556B2 (en) * 2006-01-06 2009-07-14 Dana Automotive Systems Group, Llc MLS gasket compression limiter

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104933757A (en) * 2015-05-05 2015-09-23 昆明理工大学 Method of three-dimensional garment modeling based on style descriptor
CN106021603A (en) * 2016-06-20 2016-10-12 昆明理工大学 Garment image retrieval method based on segmentation and feature matching

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Vision-related MLS Image Deformation Using Saliency Map; Yong Zhang et al.; IEEE Xplore; 2011-08-30; pp. 193-198 *
Garment inference deformation simulation method for 2D virtual try-on; Zhou Qianming et al.; Computer Engineering and Applications; 2016-08-31; Vol. 52, No. 8; pp. 158-162, 190 *

Also Published As

Publication number Publication date
CN107798713A (en) 2018-03-13

Similar Documents

Publication Publication Date Title
CN107798713B (en) Image deformation method for two-dimensional virtual fitting
US11443480B2 (en) Method and system for remote clothing selection
US10922898B2 (en) Resolving virtual apparel simulation errors
US10546433B2 (en) Methods, systems, and computer readable media for modeling garments using single view images
US10997779B2 (en) Method of generating an image file of a 3D body model of a user wearing a garment
Mueller et al. Real-time hand tracking under occlusion from an egocentric rgb-d sensor
Huang et al. Block pattern generation: From parameterizing human bodies to fit feature-aligned and flattenable 3D garments
Robson et al. Context-aware garment modeling from sketches
US9626808B2 (en) Image-based deformation of simulated characters of varied topology
CN112784865A (en) Garment deformation using multiscale tiles to counteract loss of resistance
CN109196561A (en) System and method for carrying out three dimensional garment distortion of the mesh and layering for fitting visualization
Zhu et al. An efficient human model customization method based on orthogonal-view monocular photos
Gundogdu et al. Garnet++: Improving fast and accurate static 3d cloth draping by curvature loss
Jiang et al. Transferring and fitting fixed-sized garments onto bodies of various dimensions and postures
US10467791B2 (en) Motion edit method and apparatus for articulated object
Li et al. In-home application (App) for 3D virtual garment fitting dressing room
Yang et al. A virtual try-on system in augmented reality using RGB-D cameras for footwear personalization
Zheng et al. Image-based clothes changing system
Bang et al. Estimating garment patterns from static scan data
Wang et al. From designing products to fabricating them from planar materials
Shi et al. Automatic 3D virtual fitting system based on skeleton driving
CN110176063A (en) A clothing deformation method based on human body Laplace deformation
Tisserand et al. Automatic 3D garment positioning based on surface metric
Roy et al. Incorporating human body shape guidance for cloth warping in model to person virtual try-on problems
Fadaifard et al. Image warping for retargeting garments among arbitrary poses

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220124

Address after: 650000 No. 2601, 26 / F, building 18, Derun Chuncheng garden, east of Xiaokang Avenue, Wuhua District, Kunming, Yunnan Province

Patentee after: Yunnan Youmai Technology Co.,Ltd.

Address before: 650093 No. 253, Xuefu Road, Wuhua District, Yunnan, Kunming

Patentee before: Kunming University of Science and Technology

TR01 Transfer of patent right