CN115293958A - Clothes deformation method, virtual fitting method and related device - Google Patents

Clothes deformation method, virtual fitting method and related device

Info

Publication number
CN115293958A
Authority
CN
China
Prior art keywords
clothes
image
deformation
human body
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210821116.6A
Other languages
Chinese (zh)
Inventor
陈仿雄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Original Assignee
Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Shuliantianxia Intelligent Technology Co Ltd filed Critical Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Priority to CN202210821116.6A
Publication of CN115293958A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/02Affine transformations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0641Shopping interfaces
    • G06Q30/0643Graphical representation of items or shoppers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the application relates to the technical field of image processing, and discloses a clothes deformation method, a virtual fitting method and a related device. The clothes deformation graph obtained by combining at least two deformed regions adapts to the body state of the human body and fits the human torso, improving the try-on effect. Moreover, because each clothes region of the clothes image is deformed individually, the texture flow of the deformed clothes remains reasonable. In addition, the method does not require training a model: on the one hand this reduces the dependence on sample data volume; on the other hand, combining the sub-regions after region-wise deformation fits the body state of the human body better than deforming the whole garment in one pass.

Description

Clothes deformation method, virtual fitting method and related device
Technical Field
The embodiment of the application relates to the technical field of image processing, in particular to a clothes deformation method, a virtual fitting method and a related device.
Background
With the continuous progress of modern science and technology, the scale of online shopping keeps growing, and users can purchase clothes on online shopping platforms through mobile phones. However, because the information a user obtains about clothes for sale is generally a two-dimensional display picture, the user cannot know how the clothes would look when worn. Therefore, the demand for online try-on is becoming stronger, and garment display has become an important direction in the field of modern computer vision.
At present, virtual fitting technology generally works as follows: a user image is captured, a target garment provided by the system is selected, the target garment is deformed, and the deformed target garment automatically replaces the original clothes on the user. However, existing clothes deformation methods have insufficient deformation capability, so the deformed clothes are not harmonious with the user's body.
Disclosure of Invention
The technical problem mainly solved by the embodiments of the present application is to provide a clothes deformation method, a virtual fitting method and a related device, which can adaptively deform clothes according to individual body conditions; the deformed clothes have a reasonable texture flow, which helps improve the try-on effect.
In order to solve the above technical problem, in a first aspect, an embodiment of the present application provides a method for deforming a garment, including:
acquiring a clothes image and a human body image;
performing region segmentation on the try-on garment in the clothes image to obtain at least two clothes regions;
detecting human-body key points on the human body image to obtain a plurality of key points;
finding out a plurality of target key point serial numbers corresponding to the clothes style of the try-on clothes and the target clothes area from a preset matching rule base, wherein the preset matching rule base comprises the corresponding relation among the clothes style, the clothes area and the key point serial numbers, and the target clothes area is any one of at least two clothes areas;
determining a plurality of corresponding target key points from the plurality of key points according to the sequence numbers of the target key points, and deforming the target clothes area according to the outlines indicated by the plurality of target key points to obtain a target deformed area;
and after the deformation of at least two clothes areas is completed, combining the obtained at least two deformation areas to obtain a clothes deformation graph.
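The combining step in the claim above can be illustrated by overlaying each deformed region through its mask. The numpy sketch below is a hypothetical illustration of that final merge; the function name and the (image, mask) representation are assumptions of ours, not the patent's implementation.

```python
import numpy as np

def combine_regions(regions):
    """Merge per-region deformation results into one clothes deformation map.

    regions: list of (image, mask) pairs, where image is an HxWx3 float array
    and mask is HxW with 1 inside the deformed region. Later regions overwrite
    earlier ones where their masks overlap.
    """
    h, w = regions[0][1].shape
    out = np.zeros((h, w, 3), dtype=float)
    covered = np.zeros((h, w), dtype=bool)
    for img, mask in regions:
        m = mask.astype(bool)
        out[m] = img[m]      # paint this region's pixels into the composite
        covered |= m         # track which pixels belong to any region
    return out, covered
```

In practice each mask would come from the region segmentation of the second step, warped by the same transform as its pixels.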
In some embodiments, the performing region segmentation on the garment image to obtain at least two garment regions includes:
and performing region type analysis on the clothes image by adopting an analysis algorithm to obtain at least two clothes regions, wherein the at least two clothes regions comprise a left sleeve region, a left shoulder region, a right sleeve region, a right shoulder region, a back region or a chest region.
In some embodiments, the detecting of the human key points on the human body image to obtain a plurality of key points includes:
and detecting key points of the human body by adopting a preset dense key point detection model to obtain a plurality of key points, wherein the plurality of key points comprise key points on the outline of the trunk and key points on the central line of the trunk.
In some embodiments, the deforming the target clothing region according to the contour indicated by the plurality of target key points to obtain the target deformed region includes:
iteratively fitting a transformation matrix according to the edge coordinates of the target clothes area and the plurality of target key points;
and carrying out affine change on the target clothes area by adopting the transformation matrix to obtain a target deformation area.
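A transformation matrix fitted from point correspondences, as described in this embodiment, can be sketched with an ordinary least-squares solve followed by applying the resulting affine map. This is a generic numpy illustration of affine fitting (function names ours), not the patent's exact iterative fitting procedure:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares fit of a 2x3 affine matrix A mapping src points to dst.

    src, dst: (N, 2) arrays of corresponding points, N >= 3 and not collinear.
    Solves [x y 1] @ A_ls = [x' y'] in the least-squares sense.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    X = np.hstack([src, np.ones((len(src), 1))])      # (N, 3) homogeneous coords
    A_ls, *_ = np.linalg.lstsq(X, dst, rcond=None)    # (3, 2)
    return A_ls.T                                     # conventional (2, 3) form

def apply_affine(A, pts):
    """Apply a (2, 3) affine matrix to (N, 2) points."""
    pts = np.asarray(pts, float)
    return pts @ A[:, :2].T + A[:, 2]
```

Fitting against the edge coordinates of the target clothes region and the matched target key points, then warping the region's pixels, corresponds to the two sub-steps above.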
In some embodiments, the method further comprises:
parsing the human body image, and determining whether a clothes occlusion region exists in the obtained human body parsing map;
and if a clothes occlusion region exists, hiding the clothes pixels located in the clothes occlusion region in the clothes deformation graph.
In some embodiments, the hiding the clothes pixels located in the clothes occlusion region in the clothes deformation graph includes:
acquiring a mask image of the clothes occlusion region from the human body parsing map;
and calculating the final clothes deformation result by the following formula:
P = W × (1 − M)
wherein P is the final clothes deformation result, W is the clothes deformation graph, and M is the mask image.
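The formula maps directly onto array arithmetic. A minimal sketch (function name ours), assuming W is the warped garment image and M a binary occlusion mask:

```python
import numpy as np

def hide_occluded(W, M):
    """P = W * (1 - M): zero out garment pixels inside the occlusion mask.

    W: HxWx3 float garment deformation map; M: HxW mask, 1 where occluded.
    """
    if W.ndim == 3:
        M = M[..., None]        # broadcast the mask over colour channels
    return W * (1.0 - M)
```

Broadcasting the HxW mask over the colour channels implements the element-wise product in the formula.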
In some embodiments, the method further comprises:
acquiring a plurality of image groups, wherein each image group comprises a clothes sample image and a model image, the model in the model image wears the clothes in the clothes sample image, and the clothes sample image is annotated with a clothes style;
performing region segmentation on the clothes sample image to obtain at least two clothes regions;
detecting key points of a human body on the model image to obtain a plurality of key points;
according to the matching relation between the clothes structure and the human torso, matching each clothes region of the clothes sample image with the corresponding key points of the model image, so as to obtain the correspondence among clothes style, clothes region and key-point serial number;
and after the matching of the plurality of image groups is completed, obtaining a preset matching rule base.
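The resulting rule base is essentially a lookup table from (clothes style, clothes region) to key-point serial numbers. A hypothetical sketch of accumulating such a table from annotated sample groups (the data shapes are our assumption, not the patent's storage format):

```python
def build_rule_base(samples):
    """Accumulate a preset matching rule base from annotated samples.

    samples: iterable of (style, region_map) pairs, where region_map maps a
    clothes-region name to the list of key-point serial numbers it fits to.
    Returns rule_base[style][region] -> sorted list of key-point serials.
    """
    rule_base = {}
    for style, region_map in samples:
        entry = rule_base.setdefault(style, {})
        for region, kp_ids in region_map.items():
            # union serial numbers seen across samples of the same style
            entry[region] = sorted(set(entry.get(region, [])) | set(kp_ids))
    return rule_base
```

A lookup such as `rule_base["tshirt"]["left_sleeve"]` then mirrors the claim's "finding out a plurality of target key point serial numbers".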
In order to solve the above technical problem, in a second aspect, an embodiment of the present application provides a virtual fitting method, including:
deforming the clothes image by adopting the method of the first aspect to obtain a clothes deformation graph;
and fusing the clothes deformation image and the human body image to obtain a fitting image.
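The patent does not spell out the fusion operator; a common choice, offered here only as an assumption, is alpha compositing of the warped garment over the body image:

```python
import numpy as np

def fuse_try_on(person, garment, mask):
    """Composite the warped garment onto the person image.

    person, garment: HxWx3 float images; mask: HxW in [0, 1], 1 where the
    garment's pixels are valid after deformation and occlusion handling.
    """
    a = mask[..., None]                       # broadcast over channels
    return person * (1.0 - a) + garment * a   # standard alpha compositing
```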
In order to solve the foregoing technical problem, in a third aspect, an embodiment of the present application provides an electronic device, including:
at least one processor, and
a memory communicatively coupled to the at least one processor, wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
In order to solve the above technical problem, in a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing computer-executable instructions for causing a computer device to perform the method of the first aspect.
The beneficial effects of the embodiments of the application are as follows. Different from the prior art, the clothes deformation method, fitting method and related device provided by the embodiments of the application first acquire a clothes image and a human body image, and perform region segmentation on the try-on garment in the clothes image to obtain at least two clothes regions. Human-body key points are detected to obtain a plurality of key points. A plurality of target key-point serial numbers corresponding to the clothes style of the try-on garment and the target clothes region are found in a preset matching rule base. The corresponding target key points are determined from the detected key points according to those serial numbers, and the target clothes region is deformed according to the contour indicated by the target key points to obtain a target deformed region. After all of the at least two clothes regions have been deformed, the resulting deformed regions are combined to obtain a clothes deformation graph. In this embodiment, the preset matching rule base contains the correspondence among clothes style, clothes region and key-point serial number, where the key-point serial numbers reflect the human torso area to which each clothes region should fit. Using this fit-matching relationship, the try-on garment is split into at least two clothes regions, and each region is deformed according to its corresponding key points in the human body image, so that the deformation of each clothes region conforms to the corresponding torso area and fits the torso in the human body image.
Therefore, the clothes deformation graph obtained by combining the at least two deformed regions adapts to the body state of the human body and fits the human torso, which improves the try-on effect. Moreover, because each clothes region of the clothes image is deformed individually, the texture flow of the deformed clothes remains reasonable. In addition, the method does not require training a model. On the one hand, this reduces the dependence on sample data volume and avoids the poor results that uncertain factors in model training can cause in a clothes deformation model; on the other hand, the preset matching rule base, built from summarized expert knowledge, allows the regions to be combined after region-wise deformation, so the deformation of each clothes region can respect local torso details. Compared with deforming the whole garment in one pass, this achieves a deformation better adapted to the body state of the human body.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals denote similar elements; unless otherwise specified, the figures are not drawn to scale.
Fig. 1 is a schematic view of an application scenario of a virtual fitting in some embodiments of the present application;
FIG. 2 is a schematic diagram of an electronic device according to some embodiments of the present application;
FIG. 3 is a schematic flow chart of a method of deforming a garment according to some embodiments of the present application;
FIG. 4 is a schematic illustration of a zoning garment of some embodiments of the present application;
FIG. 5 is a schematic representation of key points of a human body in some embodiments of the present application;
FIG. 6 is a schematic illustration of dense keypoints in some embodiments of the present application;
FIG. 7 is a schematic illustration of a zoning scheme for a garment according to some embodiments of the present application;
FIG. 8 is a sub-flowchart of step S50 of the method shown in FIG. 3;
FIG. 9 is a schematic view of a garment laid flat and in perspective in accordance with some embodiments of the present application;
FIG. 10 is a schematic flow chart of a method of deforming garments in accordance with some embodiments of the present application;
FIG. 11 is a schematic flow chart of a method of deforming a garment according to some embodiments of the present application;
fig. 12 is a schematic flow chart of a virtual fitting method according to some embodiments of the present application.
Detailed Description
The present application will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the present application, but are not intended to limit it in any way. It should be noted that various changes and modifications can be made by one skilled in the art without departing from the spirit of the application, all of which fall within the scope of protection of the present application.
In order to make the objects, technical solutions and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It should be noted that, provided they do not conflict, the various features of the embodiments of the present application may be combined with each other within the scope of protection of the present application. Additionally, although functional blocks are divided in the apparatus schematics and logical sequences are shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the block divisions or the flowcharts. Further, the terms "first," "second," "third," and the like used herein do not limit data or execution order, but merely distinguish identical or similar items having substantially the same function and effect.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the present application. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
In addition, the technical features mentioned in the embodiments of the present application described below may be combined with each other as long as they do not conflict with each other.
Before describing the embodiments of the present application, a brief description is given to the garment deformation method known to the inventor of the present application, so that the embodiments of the present application will be easily understood later.
In some schemes, Thin Plate Splines (TPS) are adopted to simulate garment deformation. However, TPS can only perform simple deformation, so its applicability is limited: it cannot adapt to the deformation of garments with complicated styles, and its deformation capability is clearly insufficient.
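For context on what TPS warping involves, the sketch below is the classical thin-plate-spline interpolation solved from control-point correspondences, in plain numpy. It is a textbook formulation (function names ours), not code from any scheme cited here; it also illustrates why TPS is a single global, smooth warp, which limits it for complicated garment styles:

```python
import numpy as np

def _U(r):
    # TPS radial basis U(r) = r^2 * log(r), with U(0) defined as 0
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(r == 0, 0.0, r ** 2 * np.log(r))

def tps_fit(src, dst):
    """Fit a 2-D thin-plate-spline warp mapping src control points to dst.

    src, dst: (N, 2) arrays of distinct, non-collinear points.
    Solves the standard (N+3)x(N+3) linear system for the weights w and the
    affine part a, one right-hand side per output coordinate.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    n = len(src)
    K = _U(np.linalg.norm(src[:, None] - src[None, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), src])   # affine terms [1, x, y]
    L = np.zeros((n + 3, n + 3))
    L[:n, :n] = K
    L[:n, n:] = P
    L[n:, :n] = P.T
    rhs = np.zeros((n + 3, 2))
    rhs[:n] = dst
    params = np.linalg.solve(L, rhs)        # stacked [w; a], shape (N+3, 2)
    return src, params

def tps_apply(model, pts):
    """Warp arbitrary (k, 2) points with a fitted TPS model."""
    src, params = model
    pts = np.asarray(pts, float)
    K = _U(np.linalg.norm(pts[:, None] - src[None, :], axis=-1))
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return K @ params[: len(src)] + P @ params[len(src):]
```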
In some solutions, the deformation of the garment is achieved by computing the trajectory of each pixel using optical flow techniques. The optical flow is the instantaneous speed of the pixel motion of a spatial moving object on an observation imaging plane, and is a method for finding the corresponding relation between the previous frame and the current frame by using the change of the pixels in an image sequence on a time domain and the correlation between adjacent frames so as to calculate the motion information of the object between the adjacent frames.
For example, the virtual fitting video generation method disclosed in related application CN114638754A obtains optical flow information between adjacent frames of a fitting video and uses it to correct the first pixel-coordinate image corresponding to each video frame. Because the optical flow information captures the motion relationship between pixels in adjacent frames, correcting each frame's first pixel-coordinate image makes the motion between adjacent images in the first pixel-coordinate image sequence conform to the motion between adjacent video frames in the fitting video. The deformed clothes image sequence generated from this corrected sequence is therefore temporally stable, and the clothes region in the finally generated virtual fitting video does not shake.
Deformation based on optical flow is realized by compressing clothes pixels. This method is easily disturbed by occlusions of the clothes region (for example by bags or hair): the compression deformation avoids the occluded area, so clothes pixels behind, say, the hair are not deformed while the unoccluded pixels are. The deformation is therefore non-uniform, clothes texture is lost, and unreasonable texture deformation occurs.
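The pixel-wise displacement that optical-flow-based deformation relies on can be pictured as a backward warp. In the simplified nearest-neighbour sketch below (ours, not from any cited scheme), pixels whose flow is zero, e.g. because an occlusion suppressed it, simply stay put, which produces exactly the non-uniform deformation criticized above:

```python
import numpy as np

def warp_backward(img, flow):
    """Nearest-neighbour backward warp: out[y, x] = img[y + fy, x + fx].

    img: HxW (or HxWxC) array; flow: HxWx2 per-pixel (fy, fx) displacements,
    as an optical-flow field would provide. Source coordinates are clipped
    to the image bounds.
    """
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    sy = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, h - 1)
    sx = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, w - 1)
    return img[sy, sx]
```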
To solve the above problems, the present application provides a garment deformation method, a fitting method and a related apparatus. The garment deformation method uses the fit-matching relationship between human torso areas and garment regions, reflected by a preset matching rule base, to split the try-on garment into at least two garment regions and deform each according to its corresponding key points in the human body image, so that the deformation of each garment region conforms to the corresponding torso area and fits the torso in the human body image. The garment deformation graph obtained by combining the at least two deformed regions therefore adapts to the body state of the human body and fits the torso, improving the try-on effect. Moreover, because each garment region is deformed individually, the texture flow of the deformed garment remains reasonable. In addition, the method does not require training a model: on the one hand, this reduces the dependence on sample data volume and avoids the poor results that uncertain factors in model training can cause; on the other hand, the preset matching rule base, built from summarized expert knowledge, allows the regions to be combined after region-wise deformation, so each region's deformation can respect local torso details, achieving a deformation better adapted to the body state than deforming the whole garment in one pass.
When the clothes deformation image obtained by the clothes deformation method is applied to virtual fitting, the fitting effect is real and natural.
The following describes an exemplary application of the electronic device for clothes deformation or virtual fitting provided in the embodiments of the present application, and it is understood that the electronic device can perform both clothes deformation and virtual fitting.
The electronic device provided by the embodiment of the application can be a server, for example, a server deployed in the cloud. When the server is used for clothes deformation, calculation processing is carried out according to clothes images and human body images provided by other equipment or technicians in the field, so that the clothes are deformed according to the human body trunk structure, and clothes deformation images are obtained. And when the server is used for virtual fitting, the clothes deformation image and the human body image are subjected to fusion processing to obtain a fitting image.
The electronic device provided by some embodiments of the present application may be various types of terminals such as a notebook computer, a desktop computer, or a mobile device. When the terminal is used for clothes deformation, calculation processing is carried out according to clothes images and human body images provided by other equipment or technicians in the field, so that the clothes are deformed according to the human body trunk structure, and clothes deformation images are obtained. And when the terminal is used for virtual fitting, fusing the clothes deformation image and the human body image to obtain a fitting image.
By way of example, referring to fig. 1, fig. 1 is a schematic view of an application scenario of a virtual fitting system provided in an embodiment of the present application, and a terminal 10 is connected to a server 20 through a network, where the network may be a wide area network or a local area network, or a combination of the two.
The terminal 10 can be used to acquire a clothes image and a human body image, for example, a user inputs the clothes image and the human body image through an input interface, and the terminal automatically acquires the clothes image and the human body image after the input is completed; for another example, the terminal 10 includes a camera, and the camera captures a human body image, and a clothes image library is stored in the terminal 10, so that the user can select a clothes image from the clothes image library.
In some embodiments, the terminal 10 locally performs the clothes deformation method provided by the embodiment of the present application, and performs calculation processing on the clothes image and the human body image, so that the clothes are deformed according to the human body trunk structure, and a clothes deformation image is obtained. In some embodiments, the terminal 10 may also send the clothes image and the human body image to the server 20 through the network, and the server 20 receives the clothes image and the human body image, and performs calculation processing on the clothes image and the human body image, so that the clothes are deformed according to the human body trunk structure, and a clothes deformation image is obtained. Then, the clothes deformation image is sent to the terminal 10, and after receiving the clothes deformation image, the terminal 10 displays the image on an interface for the user to view.
In some embodiments, the terminal 10 locally executes the virtual fitting method provided in the embodiments of the present application to provide a virtual fitting clothes service for the user, and performs fusion processing on the clothes deformation image and the human body image to obtain a fitting image.
In some embodiments, the terminal 10 may also send the clothes deformation image and the human body image to the server 20 through the network, and after receiving the clothes deformation image and the human body image, the server 20 performs a fusion process on the clothes deformation image and the human body image to obtain a fitting image. The obtained fitting image is then transmitted to the terminal 10. After receiving the fitting image, the terminal 10 displays the fitting image on the interface for the user to watch.
The structure of the electronic device in the embodiment of the present application is described below, and fig. 2 is a schematic structural diagram of the electronic device 500 in the embodiment of the present application, where the electronic device 500 includes at least one processor 510, a memory 550, at least one network interface 520, and a user interface 530. The various components in the electronic device 500 are coupled together by a bus system 540. It is understood that the bus system 540 is used to enable communications among the components. The bus system 540 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 540 in fig. 2.
The processor 510 may be an integrated circuit chip having signal processing capabilities, such as a general-purpose processor, a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components; the general-purpose processor may be a microprocessor or any conventional processor.
The user interface 530 includes one or more output devices 531 enabling presentation of media content, including one or more speakers and/or one or more visual display screens. The user interface 530 also includes one or more input devices 532, including user interface components to facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 550 may comprise volatile memory or nonvolatile memory, and may also comprise both volatile and nonvolatile memory. The non-volatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a Random Access Memory (RAM). The memory 550 described in embodiments herein is intended to comprise any suitable type of memory. Memory 550 optionally includes one or more storage devices physically located remote from processor 510.
In some embodiments, memory 550 can store data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 551 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 552 for communicating with other computing devices via one or more (wired or wireless) network interfaces 520, exemplary network interfaces 520 including Bluetooth, wireless Fidelity (WiFi), and Universal Serial Bus (USB), among others;
a display module 553 for enabling presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 531 (e.g., a display screen, speakers, etc.) associated with the user interface 530;
an input processing module 554 to detect one or more user inputs or interactions from one of the one or more input devices 532 and to translate the detected inputs or interactions.
As can be understood from the foregoing, the clothes deformation method and the virtual fitting method provided in the embodiments of the present application may be implemented by various types of electronic devices with computing processing capability, such as a smart terminal and a server.
The clothes deformation method provided by the embodiment of the application is described below by combining with the exemplary application and implementation of the server provided by the embodiment of the application. Referring to fig. 3, fig. 3 is a schematic flow chart of a garment deformation method provided in an embodiment of the present application.
Referring to fig. 3 again, the method S100 may specifically include the following steps:
s10: and acquiring a clothes image and a human body image.
The clothes image includes clothes; for example, clothes image 1# includes a green short sleeve, clothes image 2# includes a gray suit coat, and so on. It will be appreciated that the clothes image may be selected by the user on a terminal (e.g., a smartphone) and sent to the server. For example, if the user selects a favorite garment A in shopping software on a smartphone and selects the clothes deformation function, the smartphone transmits a clothes image including garment A to the server.
The human body image includes a human body torso, and for example, the human body image may be a full-body photograph or a half-body photograph of the user. In some embodiments, the user may photograph a full-body photograph or a half-body photograph of himself through the terminal as the human body image. The terminal sends the human body image to the server.
S20: and performing region segmentation on the clothes trying to wear in the clothes image to obtain at least two clothes regions.
Referring to fig. 4, "performing region segmentation on the try-on garment in the garment image" means dividing the try-on garment into regions to obtain at least two garment regions. It can be understood that a garment is cut from cloth into a plurality of panels according to the human body structure, and the panels are sewn together to form the garment, so that it can be worn on a human body and adapt to the human body structure. Therefore, the garment can be divided into a plurality of garment regions according to its cutting and sewing structure.
In some embodiments, the step S20 specifically includes:
s21: and analyzing the region types of the clothes images by adopting an analysis algorithm to obtain at least two clothes regions. Wherein the at least two garment regions include a left sleeve region, a left shoulder region, a right sleeve region, a right shoulder region, a back region, or a chest region.
Here, the clothes image may be analyzed pixel by pixel using an existing analysis algorithm, for example the Graphonomy algorithm, to obtain an analysis image. In the analysis image, the pixels are classified into 7 categories, which can be identified by 0 to 6; for example, 0 represents the background, 1 the left sleeve, 2 the left shoulder, 3 the right sleeve, 4 the right shoulder, 5 the back, and 6 the chest. It is understood that the pixels of type 1 constitute the left sleeve region, the pixels of type 2 the left shoulder region, the pixels of type 3 the right sleeve region, the pixels of type 4 the right shoulder region, the pixels of type 5 the back region, and the pixels of type 6 the chest region.
Dividing the garment into the left sleeve area, the left shoulder area, the right sleeve area, the right shoulder area, the back area or the chest area conforms both to the sewing structure of the garment and to the skeleton structure of the human body, and is conducive to coordinated and reasonable deformation.
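As a concrete illustration, the per-pixel categories can be collected into one mask per clothes region. The sketch below assumes the 0-6 label coding described above; the function and variable names are hypothetical, and in practice the label map would come from an analysis algorithm such as Graphonomy.

```python
import numpy as np

# Assumed 0-6 label coding from the analysis step (0 = background).
GARMENT_LABELS = {
    1: "left_sleeve", 2: "left_shoulder", 3: "right_sleeve",
    4: "right_shoulder", 5: "back", 6: "chest",
}

def split_garment_regions(parse_map):
    """Turn a per-pixel label map (H, W) into one boolean mask per garment region."""
    return {name: parse_map == code for code, name in GARMENT_LABELS.items()}

# Toy 4x4 label map: background (0), left sleeve (1), chest (6).
parse = np.array([
    [0, 1, 6, 6],
    [0, 1, 6, 6],
    [0, 0, 6, 6],
    [0, 0, 0, 0],
])
regions = split_garment_regions(parse)
```

Each mask can then be used to crop its region from the clothes image and to read off the region's edge coordinates.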
In some embodiments, each garment region is represented by region edge coordinate information. For example, the left sleeve region is represented by 4 coordinate points on the region edge, i.e., the 4 coordinate points represent the position and shape of the left sleeve region.
In this embodiment, the clothes image is divided into the left sleeve area, the left shoulder area, the right sleeve area, the right shoulder area, the back area or the chest area by the analysis algorithm, which conforms both to the sewing structure of the garment and to the skeleton structure of the human body, and is conducive to coordinated and reasonable deformation.
S30: and detecting key points of the human body to obtain a plurality of key points.
A human body key point detection algorithm is applied to the human body image to locate the human body key point information (i.e., a plurality of key points on the human body). As shown in fig. 5, the key points may be coordinate points of the nose, left eye, right eye, left ear, right ear, shoulders, elbows, wrists, hips, knees and ankles. In some embodiments, the OpenPose algorithm may be used for detection. In some embodiments, the detection may employ a 2D key point detection algorithm such as the Convolutional Pose Machine (CPM) or the Stacked Hourglass Network (Hourglass).
It will be appreciated that each key point has its own serial number, and the location of each key point is represented by coordinates. For example, referring to fig. 5 again, the OpenPose algorithm originally defines 18 key points, whose serial numbers represent the joints of the human body: 0 (nose), 1 (neck), 2 (right shoulder), 3 (right elbow), 4 (right wrist), 5 (left shoulder), 6 (left elbow), 7 (left wrist), 8 (right hip), 9 (right knee), 10 (right ankle), 11 (left hip), 12 (left knee), 13 (left ankle), 14 (right eye), 15 (left eye), 16 (right ear), 17 (left ear). For different human bodies, the OpenPose algorithm detects the same 18 key points, but the coordinates of the 18 key points differ from one body type to another.
Therefore, the human body image is subjected to human body key point detection to obtain a plurality of key points. Each of the plurality of key points includes a serial number and a corresponding coordinate, the serial number representing a human joint.
In some embodiments, the step S30 specifically includes:
s31: and detecting key points of the human body by adopting a preset dense key point detection model to obtain a plurality of key points, wherein the plurality of key points comprise key points on the outline of the trunk and key points on the central line of the trunk.
The preset dense key point detection model can detect more key points and is trained in advance, for example, as an optimization of the OpenPose algorithm. For example, the 18 key points originally defined by the OpenPose algorithm are expanded into 38 key points; please refer to fig. 6, which shows the serial numbers and positions of the 38 key points.
In this embodiment, the plurality of key points include key points on the outer contour of the trunk and key points on the centerline of the trunk, so that the trunk of the human body can be divided more finely to match with the clothes region, and the deformation of the small-area clothes region is more accurate.
S40: and finding out a plurality of target key point serial numbers corresponding to the clothes style of the try-on clothes and the target clothes area from a preset matching rule base.
The preset matching rule base comprises corresponding relations among the clothes styles, the clothes areas and the key point serial numbers. It can be understood that the preset matching rule base comprises a plurality of records, and each record comprises a corresponding relationship among the clothes style, the clothes area and the key point serial number. For example, one of the records is a standard short sleeve (clothes style), a left shoulder area, and key point serial numbers 26, 27, 28, 30, and 24 (see fig. 6 for key points), which represent that the left shoulder area should be located in the area where the key points 26, 27, 28, 30, and 24 of the trial wearer are located when the standard short sleeve is tried on.
It is understood that the key point serial numbers corresponding to the same clothes region may differ across clothes styles. For example: standard short sleeve (clothes style), chest region, key point serial numbers 5, 24, 30, 19, 18, 17, 16 (see fig. 6 for the key points); midriff-baring short sleeve (clothes style), chest region, key point serial numbers 5, 24, 30, 20, 22, 15. For another example: standard shirt (clothes style), right sleeve area, key point serial numbers 3, 5, 12, 13; drop-shoulder blouse (clothes style), right sleeve area, key point serial numbers 4, 5, 12, 13.
The preset matching rule base can be constructed in advance: those skilled in the art can analyze and summarize clothes of various styles and their fitting effects, so as to summarize the distribution of the human trunk corresponding to each clothes area when clothes of different styles are worn, and thereby construct the preset matching rule base.
Under the guidance of the preset matching rule base, a plurality of target key point serial numbers corresponding to the clothes style of the try-on clothes and the target clothes area can be found out from the preset matching rule base. Here, the target garment region is any one of at least two garment regions, and for convenience of expression, the target garment region is taken to represent any one of the garment regions of the try-on garment.
The target key point sequence number is the garment style and the key point sequence number corresponding to the target garment area. For example, if the style of the clothes is "shoulder-drop blouse" and the target clothes area is "right sleeve area", the corresponding target key point sequence numbers "4, 5, 12, 13" can be found in the preset matching rule base.
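The lookup described above can be sketched as a simple table keyed by style and region. The entries below mirror the examples given in the text; the style/region name strings are hypothetical labels, and a real rule base would cover far more combinations.

```python
# A minimal sketch of the preset matching rule base:
# (clothes style, clothes region) -> target key point serial numbers.
RULE_BASE = {
    ("standard_shirt", "right_sleeve"): [3, 5, 12, 13],
    ("drop_shoulder_blouse", "right_sleeve"): [4, 5, 12, 13],
    ("standard_short_sleeve", "left_shoulder"): [26, 27, 28, 30, 24],
}

def lookup_target_serials(style, region):
    """Find the target key point serial numbers for a style/region pair."""
    return RULE_BASE[(style, region)]

serials = lookup_target_serials("drop_shoulder_blouse", "right_sleeve")
```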
S50: and determining a plurality of corresponding target key points from the plurality of key points according to the sequence numbers of the plurality of target key points, and deforming the target clothes area according to the outlines indicated by the plurality of target key points to obtain a target deformed area.
After the serial numbers of the target key points are obtained, the corresponding target key points can be determined from a plurality of key points of the human body image, namely the coordinate positions of the serial numbers of the target key points in the human body image are obtained.
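This serial-number-to-coordinate step can be sketched as a dictionary lookup. The key point coordinates below are made-up illustrative values; the actual detector output format may differ.

```python
import numpy as np

# Assumed detector output: serial number -> (x, y) coordinate in the
# human body image (illustrative values only).
keypoints = {3: (60.0, 120.0), 4: (55.0, 160.0), 5: (140.0, 80.0),
             12: (150.0, 200.0), 13: (150.0, 240.0)}

def select_target_keypoints(serials, keypoints):
    """Map target serial numbers to their coordinates in the human body image."""
    return np.array([keypoints[s] for s in serials])

# Target serial numbers for a drop-shoulder blouse's right sleeve area.
contour = select_target_keypoints([4, 5, 12, 13], keypoints)
```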
It can be understood that, based on the preset matching rule base, which reflects the distribution of the human trunk corresponding to each clothes area when clothes of different styles are worn, the target clothes area should be located in the region defined by the target key points when the try-on clothes is worn. The target clothes area is then deformed according to the contour indicated by the plurality of target key points to obtain a target deformation area. Therefore, the target deformation area can fit the trunk part where the matched target key points are located.
Referring to fig. 7, fig. 7 is a schematic view of the panel segmentation of a drop-shoulder shirt. The drop-shoulder shirt is divided into a left sleeve region, a left shoulder region, a right sleeve region, a right shoulder region, a back region, and a chest region. If the target clothes region is the left sleeve region, the corresponding target key points are 29, 37, 38 and 30 (see fig. 6), so the left sleeve region is deformed according to the contour indicated by the target key points 29, 37, 38 and 30 to obtain a left sleeve deformation region. That is, the contour of the left sleeve deformation region approximately coincides with the contour indicated by the target key points 29, 37, 38, 30. If the target clothes region is the chest region, the corresponding target key points are 5, 30, 19 and 16, so the chest region is deformed according to the contour indicated by the target key points 5, 30, 19 and 16 to obtain a chest deformation region, which approximately coincides with the contour indicated by the target key points 5, 30, 19, 16. It is understood that "approximately coincides" here means that the two contours are the same or substantially the same (their deviation is within a certain range).
It is understood that the examples herein are merely illustrative of "deforming the target clothing region by the contour indicated by the plurality of target key points to obtain the target deformed region", and do not pose any limitation. Other garment styles and other garment regions are not illustrated.
In some embodiments, referring to fig. 8, the step of "deforming the target clothing region according to the contour indicated by the target key points to obtain the target deformed region" specifically includes:
s51: and iteratively fitting a transformation matrix according to the edge coordinates of the target clothes area and the plurality of target key points.
S52: and carrying out affine change on the target clothes area by adopting the transformation matrix to obtain a target deformation area.
After the edge coordinates are affine-transformed according to the transformation matrix, the transformed edge coordinates are close to the corresponding key points. That is, the contour of the target deformation area obtained by affine-transforming the target clothes area according to the transformation matrix is substantially the same as the contour defined by the plurality of target key points.
In this embodiment, the transformation matrix may be iteratively fitted using the OpenCV API according to the Levenberg-Marquardt (LM) algorithm and least squares.
It can be understood that each clothes pixel in the target clothes area is affine-transformed according to the transformation matrix, and the transformed clothes pixels form the target deformation area.
In some embodiments, the clothes pixels are transformed using the following affine formula:

(x'_i, y'_i, 1)^T = H · (x_i, y_i, 1)^T

where (x_i, y_i) are the coordinates of an original clothes pixel, (x'_i, y'_i) are the transformed coordinates, and H is the transformation matrix obtained by the fitting.
In this embodiment, the target clothes area is affine-transformed using the fitted transformation matrix, so that the obtained target deformation area fits and matches the trunk part where the target key points are located.
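Steps S51-S52 can be sketched with a plain least-squares fit in place of the OpenCV LM-based routine the text mentions; this is a simplified stand-in under that substitution, not the embodiment's exact implementation.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares fit of a 2x3 affine matrix H mapping src points to dst.

    src, dst: (N, 2) arrays of corresponding coordinates, N >= 3.
    """
    n = src.shape[0]
    src_h = np.hstack([src, np.ones((n, 1))])        # homogeneous coords (N, 3)
    X, *_ = np.linalg.lstsq(src_h, dst, rcond=None)  # solves src_h @ X ≈ dst
    return X.T                                       # H, shape (2, 3)

def apply_affine(H, pts):
    """Apply the 2x3 affine matrix H to (N, 2) pixel coordinates."""
    pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return pts_h @ H.T

# Edge corners of a toy target clothes region, and the contour indicated by
# the target key points (here a pure translation by (1, 2)).
edge = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 20.0], [0.0, 20.0]])
target = np.array([[1.0, 2.0], [11.0, 2.0], [11.0, 22.0], [1.0, 22.0]])
H = fit_affine(edge, target)
moved = apply_affine(H, edge)
```

Applying `apply_affine` to every pixel coordinate of the region (rather than just its edge corners) yields the target deformation area.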
S60: and after the deformation of at least two clothes areas is completed, combining the obtained at least two deformation areas to obtain a clothes deformation graph.
For the at least two garment regions corresponding to the garment image, each garment region is deformed in the manner shown in step S50, thereby obtaining at least two deformation regions. Since each clothes area is obtained by segmenting the whole try-on garment, the clothes deformation graph can be obtained by combining the at least two deformation areas. It is understood that "combining" here means splicing or piecing together along the segmentation lines.
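The combination step can be sketched by compositing the deformed region images onto a shared canvas, assuming zero marks background and the deformed regions are (nearly) disjoint; a real implementation would splice along the segmentation lines.

```python
import numpy as np

def combine_regions(region_images):
    """Combine deformed region images (same shape) into one clothes deformation graph.

    Zero-valued pixels are treated as background; each region fills only
    where the canvas is still empty.
    """
    canvas = np.zeros_like(region_images[0])
    for img in region_images:
        canvas = np.where(canvas == 0, img, canvas)
    return canvas

# Toy grayscale deformation regions (illustrative values).
left_sleeve = np.array([[7, 7, 0], [0, 0, 0]])
chest = np.array([[0, 0, 9], [0, 0, 9]])
combined = combine_regions([left_sleeve, chest])
```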
In this embodiment, the preset matching rule base includes the corresponding relationship among the clothes style, the clothes area and the key point sequence number, wherein the key point sequence number reflects the human trunk area correspondingly attached to the clothes area, so that the corresponding attachment matching relationship between the human trunk area and the clothes area reflected by the preset matching rule base is utilized to split the try-on clothes into at least two clothes areas, the clothes areas are deformed according to the corresponding key points in the human body image, the deformation of each clothes area conforms to the corresponding human trunk area, and the human trunk in the human body image can be attached and matched. Therefore, the clothes deformation graph obtained by combining the at least two deformation areas can adapt to the body state of the human body and adapt to the trunk of the human body, and the clothes fitting effect is improved.
And each clothes area of the clothes image is correspondingly deformed, so that the deformed clothes texture trend is reasonable. As used herein, "garment texture" refers to the natural folds that form when a garment is worn on a person. It will be appreciated that the cloth material of the garment is flexible and conforms to the body of a person to form natural corrugated folds when placed on the body of a person. Referring to fig. 9, the tiled clothes have no wrinkles, that is, the trying-on clothes in the original clothes image have no wrinkles following the shape of the human body, and after the trying-on clothes are deformed to simulate the trying-on effect, natural wrinkles are generated due to the change of the whole pixel position of the clothes. In this embodiment, each region of the garment is deformed accordingly, and natural wrinkles are generated in the process of changing the position of the whole pixels of the garment. Compared with the situation that texture deformation is unreasonable due to local deformation based on optical flow, the deformation mode (combination of deformed clothes areas) of the embodiment can enable the texture of the deformed clothes to be reasonable and natural.
In addition, the embodiment does not need to train a model, so that on one hand, the dependence on the sample data volume is reduced, and the condition that the effect of the clothes deformation model is poor due to uncertain factors existing in the model training process is effectively avoided; on the other hand, the preset matching rule base constructed based on expert knowledge summary is combined after regional deformation, the deformation of each clothes region can take care of the local details of the trunk, and compared with the one-time deformation of the whole clothes, the clothes deformation more adaptive to the body state of the human body can be realized.
In some embodiments, referring to fig. 10, the method S100 further includes:
s70: and carrying out human body analysis on the human body image, and determining whether a clothes shielding area exists in the obtained human body analysis image.
S80: and if the clothes shielding area exists, hiding clothes pixels located in the clothes shielding area in the clothes deformation graph.
Here, the human body image may be analyzed by an existing analysis algorithm, for example the Graphonomy algorithm, to obtain a human body analysis map. In the human body analysis map, the pixels are classified into 20 types, which may be identified by 0 to 19; for example, 0 represents the background, 1 hat, 2 hair, 3 glove, 4 sunglasses, 5 upper clothes, 6 one-piece dress, 7 coat, 8 sock, 9 pants, 10 torso skin, 11 scarf, 12 skirt, 13 face, 14 left arm, 15 right arm, 16 left leg, 17 right leg, 18 left shoe, and 19 right shoe. From the human body analysis map, the category to which each region in the human body image belongs can be determined.
If other pixels such as hair or arm pixels exist in the clothes area, the clothes are shielded, i.e., a clothes shielding area exists. Therefore, whether a clothes shielding area exists can be determined from the human body analysis map.
And if the clothes shielding area exists, hiding clothes pixels located in the clothes shielding area in the clothes deformation graph. Here, "hide" means to clear away the clothes pixels located in the clothes-blocked area, so as to prepare for the subsequent generation of a realistic fitting image, and the original pixels of the blocked area, such as hair pixels, arm pixels, etc., can be displayed in the fitting image, so that the fitting effect is more realistic.
In some embodiments, the step S80 specifically includes:
s81: and obtaining a mask image of the clothes shielding area from the human body analytic graph.
Wherein the mask image comprises values 0 and 1, wherein 1 indicates occluded and 0 indicates not occluded. Thus, the region formed by the numerical value 1 is the clothing shielding region. Specifically, based on the human body analysis map, the pixel of the blocking object in the clothes area may be set to 1, and the remaining pixels may be set to 0.
S82: calculating to obtain a final clothes deformation result by adopting the following formula;
P=W*(1-M)
wherein, P is the final clothes deformation result, W is the clothes deformation graph, and M is the mask image.
In this embodiment, a mask image is obtained according to the human body analysis map, and the mask image is used to perform mask processing on the whole clothes deformation map, so that clothes pixels located in a clothes shielding area in the clothes deformation map can be accurately hidden.
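The masking formula P = W * (1 - M) is element-wise, as the small sketch below with illustrative values shows:

```python
import numpy as np

# Toy grayscale clothes deformation graph W and mask image M, where 1 marks
# pixels occluded by hair or arms (values are illustrative).
W = np.array([[200, 180], [160, 140]])
M = np.array([[0, 1], [0, 0]])

# P = W * (1 - M): clothes pixels inside the shielding area are cleared to 0.
P = W * (1 - M)
```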
In some embodiments, referring to fig. 11, the method S100 further includes:
s101: acquiring a plurality of image groups, wherein the image groups comprise a clothes sample image and a model image, the model in the model image is worn with clothes in the clothes sample image, and the clothes sample image is marked with clothes style.
Here, the plurality of image groups are empirical data for summarizing a preset matching rule base. It will be appreciated that the image set can cover a variety of garment styles. The model image can show the real fitting effect of the clothes in the corresponding clothes sample image. For example, if the sample image of the garment includes loose short sleeves, the corresponding model image may have the loose short sleeves worn on the model. The sample image of the garment is labeled "loose short sleeves" (garment style). In some embodiments, the marked garment style may be indicated by a number, such as 0 for a loose short sleeve.
S102: and performing region segmentation on the clothes sample image to obtain at least two clothes regions.
Here, the clothing sample image may be subjected to region segmentation with reference to the segmentation manner in step S20 to obtain at least two clothing regions. The detailed description of the division is omitted here. In some embodiments, the garment region corresponding to the garment sample image may also include a left sleeve region, a left shoulder region, a right sleeve region, a right shoulder region, a back region, or a chest region.
S103: and detecting key points of the human body of the model image to obtain a plurality of key points.
Here, the human body key point detection may be performed on the model image with reference to step S30 to obtain a plurality of key points. The detailed description of the detection method is omitted here. In some embodiments, the plurality of keypoints comprises keypoints on the outer contour of the torso and keypoints on the midline of the torso.
S104: and according to the matching relation between the clothes structure and the human body trunk, matching each clothes area corresponding to the clothes sample image with the key point corresponding to the model image respectively to obtain the corresponding relation between the clothes style, the clothes area and the key point sequence number.
For example, the garment structure includes left sleeves, left shoulders, right sleeves, right shoulders, back, and chest, and the torso includes left and right shoulders, left and right elbows, left and right wrists, left and right hips, left and right knees, and left and right ankles, such that there is a fitting relationship between the garment structure and the torso, e.g., left sleeve fits the left elbow, garment left shoulder fits the left shoulder of the torso, and so forth. Therefore, each clothes area corresponding to the clothes sample image can be matched with the key point corresponding to the model image, and the corresponding relation among the clothes style, the clothes area and the key point sequence number is obtained. For example, standard shirt (style of clothing), right sleeve area, key point serial numbers 3, 5, 12, 13.
For different clothes styles, the matching relationship can be properly adjusted according to the style characteristics, such as shoulder-drop loose-fitting shirt (clothes style), right sleeve area, key point serial numbers 4, 5, 12 and 13.
S105: and after the matching of the plurality of image groups is completed, obtaining a preset matching rule base.
It can be understood that after the matching is completed for each image group in the plurality of image groups, that is, after the corresponding relationships among the clothes style, the clothes area and the key point serial number are obtained, the corresponding relationships of the plurality of image groups form the preset matching rule base.
The preset matching rule base can be used as an expert knowledge base for guiding the clothes deformation of most of clothes on the market.
In the embodiment, the image groups of different styles are collected, and the preset matching rule base is constructed by utilizing expert knowledge, so that the clothes deformation is guided, and compared with model training, the method not only reduces the dependence on sample data amount, but also can realize the clothes deformation more adaptive to the body posture of a human body.
To sum up, the clothes deformation method provided by the embodiments of the present application is based on a preset matching rule base that includes the correspondence among clothes style, clothes area and key point serial number, where the key point serial numbers reflect the human trunk area to which a clothes area should fit. Using this fit-matching relationship, the try-on clothes is split into at least two clothes areas, each of which is deformed according to its corresponding key points in the human body image, so that the deformation of each clothes area conforms to the corresponding human trunk area and fits the human trunk in the human body image. Therefore, the clothes deformation graph obtained by combining the at least two deformation areas can adapt to the body state of the human body, and the clothes fitting effect is improved. Moreover, since each clothes area of the clothes image is deformed correspondingly, the texture trend of the deformed clothes is reasonable. In addition, the method does not need to train a model: on one hand, this reduces the dependence on the amount of sample data and effectively avoids the poor performance of a clothes deformation model caused by uncertain factors in the model training process; on the other hand, combining region-by-region deformation with the preset matching rule base constructed from expert knowledge allows the deformation of each clothes area to respect the local details of the trunk, and compared with deforming the whole garment at once, achieves clothes deformation that is better adapted to the body posture of the human body.
After the clothes image is deformed by the clothes deformation method provided by the embodiment of the application to obtain the clothes deformation image, in some embodiments, the clothes deformation image can be applied to the detailed introduction of the clothes to display the clothes in a three-dimensional manner for customers to know; in some embodiments, the garment deformation image may be applied to a virtual fitting.
The virtual fitting provided by the embodiment of the application can be implemented by various electronic devices with computing processing capacity, such as an intelligent terminal, a server and the like.
The virtual fitting method provided by the embodiment of the present application is described below with reference to exemplary applications and implementations of the terminal provided by the embodiment of the present application. Referring to fig. 12, fig. 12 is a schematic flowchart of a virtual fitting method provided in the embodiment of the present application. The method S200 includes the steps of:
s201: and (3) deforming the clothes image by adopting any one of the clothes deformation method embodiments to obtain a clothes deformation graph.
In this embodiment, the garment image includes a try-on garment and the body image includes a human torso, which may be the torso of a try-on wearer. The clothes deformation diagram obtained by the deformation of any one of the clothes deformation method embodiments can adapt to the body state of a human body and is adaptive to the trunk of the human body, and the clothes texture trend is reasonable.
S202: and fusing the clothes deformation image and the human body image to obtain a fitting image.
In some embodiments, the clothes deformation graph and the human body image are input into a generative adversarial network, which fuses them to generate a fitting image. Using a generative adversarial network for image generation is a conventional method known to those skilled in the art and will not be described in detail here.
In the embodiment, the fitting image obtained after fusion not only has the characteristics of the specifically deformed clothes, but also has the characteristics of the human body trunk, and the fitting image can adapt to the body state of the human body and adapt to the human body trunk due to the clothes deformation image, and the clothes texture trend is reasonable, so that the fitting image can embody a real and vivid fitting effect.
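As a rough illustration of the fusion step only, the garment pixels can be pasted over the body image; this naive composite is a stand-in for the generative adversarial network the embodiment actually uses, which additionally blends boundaries, lighting and texture.

```python
import numpy as np

def naive_fuse(body, garment):
    """Paste non-zero garment pixels over the body image.

    A deliberately naive stand-in for the generative fusion step.
    """
    return np.where(garment > 0, garment, body)

# Toy grayscale body image and deformed garment (illustrative values).
body = np.array([[10, 10], [10, 10]])
garment = np.array([[0, 55], [55, 0]])
fitting = naive_fuse(body, garment)
```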
Embodiments of the present application further provide a computer-readable storage medium storing computer-executable instructions for causing an electronic device to perform a clothes deformation method provided in the embodiments of the present application, for example the clothes deformation method shown in figs. 3 to 11, or a virtual fitting method provided in the embodiments of the present application, for example the virtual fitting method shown in fig. 12.
In some embodiments, the storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, e.g., in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device (a device that includes a smart terminal and a server), or on multiple computing devices located at one site, or distributed across multiple sites and interconnected by a communication network.
Embodiments of the present application also provide a computer-readable storage medium storing a computer program, where the computer program includes program instructions, and the program instructions, when executed by a computer, cause the computer to execute a clothes deformation method or a virtual fitting method as in the foregoing embodiments.
It should be noted that the above-described device embodiments are merely illustrative, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a general hardware platform, and certainly also by hardware. All or part of the processes of the methods in the above embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Within the idea of the present application, the technical features of the above embodiments or of different embodiments may be combined, the steps may be implemented in any order, and many other variations of the different aspects of the present application exist; for brevity, they are not described in detail. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of the technical features may be equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A clothes deformation method, comprising:
Acquiring a clothes image and a human body image;
performing region segmentation on try-on clothes in the clothes image to obtain at least two clothes areas;
detecting key points of the human body image to obtain a plurality of key points;
looking up, from a preset matching rule base, a plurality of target key point serial numbers corresponding to the clothes style of the try-on clothes and a target clothes area, wherein the preset matching rule base comprises correspondences among clothes styles, clothes areas, and key point serial numbers, and the target clothes area is any one of the at least two clothes areas;
determining a plurality of corresponding target key points from the plurality of key points according to the plurality of target key point serial numbers, and deforming the target clothes area according to the outline indicated by the plurality of target key points to obtain a target deformed area;
and after the at least two clothes areas are all deformed, merging the obtained at least two deformed areas to obtain a clothes deformation map.
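The lookup step of claim 1 can be sketched as a plain dictionary keyed by (clothes style, clothes area) pairs. The style names, area names, and serial numbers below are illustrative assumptions, not values from the patent:

```python
# Hypothetical rule base: (clothes style, clothes area) -> key point serial numbers.
# All styles, area names, and serial numbers here are made up for illustration.
RULE_BASE = {
    ("short_sleeve", "left_sleeve"): [3, 4, 5],
    ("short_sleeve", "chest"): [10, 11, 12, 13],
    ("long_sleeve", "left_sleeve"): [3, 4, 5, 6, 7],
}

def lookup_target_keypoints(style, area, keypoints):
    """Return the detected key points whose serial numbers match (style, area)."""
    serials = RULE_BASE[(style, area)]
    return [keypoints[s] for s in serials]

# Key points indexed by serial number; here a toy list of (x, y) coordinates.
kps = [(i, i * 2) for i in range(20)]
print(lookup_target_keypoints("short_sleeve", "left_sleeve", kps))
# [(3, 6), (4, 8), (5, 10)]
```

A real rule base would be built from annotated image groups (see claim 7); the dictionary above only shows the shape of the lookup.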
2. The method according to claim 1, wherein the performing region segmentation on the try-on clothes in the clothes image to obtain at least two clothes areas comprises:
performing region type parsing on the clothes image by using a parsing algorithm to obtain the at least two clothes areas, wherein the at least two clothes areas comprise a left sleeve area, a left shoulder area, a right sleeve area, a right shoulder area, a back area, or a chest area.
3. The method according to claim 2, wherein the detecting key points of the human body image to obtain a plurality of key points comprises:
detecting key points of the human body image by using a preset dense key point detection model to obtain the plurality of key points, wherein the plurality of key points comprise key points on the contour of the torso and key points on the center line of the torso.
4. The method according to claim 1, wherein the deforming the target clothes area according to the outline indicated by the plurality of target key points to obtain a target deformed area comprises:
iteratively fitting a transformation matrix according to edge coordinates of the target clothes area and the plurality of target key points; and
performing an affine transformation on the target clothes area by using the transformation matrix to obtain the target deformed area.
5. The method according to any one of claims 1-4, further comprising:
parsing the human body image, and determining whether a clothes occlusion area exists in the obtained human body parsing map; and
if the clothes occlusion area exists, hiding clothes pixels located in the clothes occlusion area in the clothes deformation map.
6. The method according to claim 5, wherein the hiding clothes pixels located in the clothes occlusion area in the clothes deformation map comprises:
acquiring a mask image of the clothes occlusion area from the human body parsing map; and
calculating a final clothes deformation result by using the following formula:
P = W * (1 - M)
wherein P is the final clothes deformation result, W is the clothes deformation map, and M is the mask image.
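The formula of claim 6 is an elementwise masking: wherever M is 1 (body in front of clothes, e.g. an arm across the torso), the clothes pixel is zeroed out. A minimal numpy sketch, where the image shapes are assumptions:

```python
import numpy as np

def hide_occluded(W, M):
    """P = W * (1 - M): zero out clothes pixels inside the occlusion mask.

    W: clothes deformation map, shape (H, W, C).
    M: binary occlusion mask, shape (H, W), 1 where the body occludes the clothes.
    """
    return W * (1.0 - M)[..., None]  # broadcast the mask over the channel axis

W = np.ones((2, 2, 3))              # toy 2x2 RGB clothes deformation map
M = np.array([[1, 0], [0, 0]])      # only the top-left pixel is occluded
P = hide_occluded(W, M)
print(P[0, 0], P[0, 1])             # occluded pixel zeroed, the rest unchanged
```

The occluded pixels become transparent in the later fusion step, so the body is drawn on top of the clothes there.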
7. The method of claim 1, further comprising:
acquiring a plurality of image groups, wherein each image group comprises a clothes sample image and a model image, a model in the model image wears the clothes in the clothes sample image, and the clothes sample image is labeled with a clothes style;
performing region segmentation on the clothes sample image to obtain at least two clothes areas;
performing human body key point detection on the model image to obtain a plurality of key points;
according to the matching relationship between the clothes structure and the human torso, matching each clothes area corresponding to the clothes sample image with the key points corresponding to the model image, to obtain the correspondence among the clothes style, the clothes area, and the key point serial numbers;
and obtaining the preset matching rule base after the matching of the plurality of image groups is completed.
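The construction loop of claim 7 can be sketched as follows. The actual structure-to-torso matching rule is not given by the patent, so a stand-in heuristic (nearest key point to each clothes-area edge point) is used, and all names and data are illustrative assumptions:

```python
import numpy as np

def nearest_serials(edge_pts, keypoints):
    """Stand-in matching rule: serial numbers of the key points nearest each edge point."""
    kps = np.asarray(keypoints, dtype=float)
    serials = {int(np.argmin(np.linalg.norm(kps - p, axis=1)))
               for p in np.asarray(edge_pts, dtype=float)}
    return sorted(serials)

def build_rule_base(image_groups, match_area_to_keypoints):
    """Aggregate (style, area) -> key point serial numbers over all image groups."""
    rule_base = {}
    for style, areas, keypoints in image_groups:
        for area_name, area_edge_pts in areas.items():
            serials = match_area_to_keypoints(area_edge_pts, keypoints)
            rule_base[(style, area_name)] = serials
    return rule_base

# One toy image group: labeled style, segmented area edge points, detected key points.
groups = [("t_shirt",
           {"left_sleeve": [(0.0, 0.0), (1.0, 1.0)]},
           [(0.1, 0.1), (0.9, 1.1), (5.0, 5.0)])]
print(build_rule_base(groups, nearest_serials))
# {('t_shirt', 'left_sleeve'): [0, 1]}
```

Once built offline over many image groups, this table plays the role of the preset matching rule base that claim 1 looks up at fitting time.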
8. A virtual fitting method, comprising:
deforming a clothes image by the clothes deformation method according to any one of claims 1 to 7 to obtain a clothes deformation map; and
fusing the clothes deformation map and a human body image to obtain a fitting image.
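The fusion step of claim 8 can be sketched as alpha compositing of the clothes deformation map over the human body image. The claim does not fix a particular fusion operator, so the mask-based blend below is an assumption:

```python
import numpy as np

def fuse(deformed_clothes, clothes_alpha, human):
    """Composite the clothes deformation map over the human body image.

    deformed_clothes: (H, W, C) clothes deformation map.
    clothes_alpha:    (H, W) coverage mask, 1 where clothes pixels are visible.
    human:            (H, W, C) human body image.
    """
    a = clothes_alpha[..., None]                 # broadcast over channels
    return a * deformed_clothes + (1.0 - a) * human

human = np.zeros((2, 2, 3))                      # toy black body image
clothes = np.ones((2, 2, 3))                     # toy white clothes map
alpha = np.array([[1.0, 0.0], [1.0, 0.0]])       # left column covered by clothes
out = fuse(clothes, alpha, human)
print(out[0, 0], out[0, 1])                      # clothes pixel, then human pixel
```

With the occlusion handling of claim 6, the alpha would already be zero inside the clothes occlusion area, so the body correctly shows through there.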
9. An electronic device, comprising:
at least one processor, and
a memory communicatively coupled to the at least one processor, wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
10. A computer-readable storage medium having stored thereon computer-executable instructions for causing a computer device to perform the method of any one of claims 1-8.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210821116.6A CN115293958A (en) 2022-07-13 2022-07-13 Clothes deformation method, virtual fitting method and related device


Publications (1)

Publication Number Publication Date
CN115293958A (en) 2022-11-04

Family

ID=83821790


Country Status (1)

Country Link
CN (1) CN115293958A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination