CN114445271B - Method for generating virtual fitting 3D image - Google Patents

Method for generating virtual fitting 3D image Download PDF

Info

Publication number
CN114445271B
CN114445271B (application number CN202210338163.5A; earlier publication CN114445271A)
Authority
CN
China
Prior art keywords
clothing
virtual fitting
model
video
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210338163.5A
Other languages
Chinese (zh)
Other versions
CN114445271A (en
Inventor
李津
蒋婉棋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Huali Intelligent Technology Co ltd
Original Assignee
Hangzhou Huali Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Huali Intelligent Technology Co ltd filed Critical Hangzhou Huali Intelligent Technology Co ltd
Priority to CN202210338163.5A priority Critical patent/CN114445271B/en
Publication of CN114445271A publication Critical patent/CN114445271A/en
Application granted granted Critical
Publication of CN114445271B publication Critical patent/CN114445271B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T3/04
    • G06N3/04 Computing arrangements based on biological models — Neural networks — Architecture, e.g. interconnection topology
    • G06N3/08 Computing arrangements based on biological models — Neural networks — Learning methods
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

Embodiments disclosed herein provide a method of generating a virtual try-on 3D image. To make the virtual fitting video simulate the realistic effect of a real person trying on clothing as closely as possible, the associations among the size of the human body model, the posture changes of the human body model, and the shape changes of the clothing model are taken into account, and those associations in the virtual fitting video are kept as close as possible to the corresponding associations in a real fitting scene, so that the generated virtual fitting video is more realistic.

Description

Method for generating virtual fitting 3D image
Technical Field
Embodiments of the present disclosure relate to the field of information technology, and in particular, to a method for generating a virtual fitting 3D image.
Background
At present, some application scenarios call for producing try-on images of apparel goods.
One such scenario is e-commerce live broadcasting. It is increasingly common for e-commerce merchants to market goods to users (as live-stream viewers) through internet live broadcasts. When a host promotes apparel goods (such as clothes, trousers, shoes, hats, or accessories) to users in a live broadcast, the host usually tries the goods on in person to show how they look when worn and attract users to buy them. In an actual live broadcast, however, the host can hardly satisfy every fitting request raised by users in time, so try-on images of certain apparel goods must be produced in advance and played to users.
However, producing try-on images with a real person on camera is relatively costly.
Disclosure of Invention
Various embodiments of the present description provide a method of generating a virtual fitting 3D image so that a more realistic virtual fitting 3D image can be generated.
Various embodiments of the present disclosure provide the following technical solutions:
according to a first aspect of various embodiments herein, there is provided a method of generating a virtual try-on 3D image, comprising:
determining a human body model to be tried on and a clothing model of a clothing commodity to be tried on; the human body model and the clothing model belong to a digital 3D model;
acquiring a size parameter and a posture parameter sequence of the human body model; the posture parameter sequence comprises a plurality of consecutive posture parameters of the human body model for performing a plurality of consecutive posture changes;
acquiring a default shape parameter of the clothing model; the default shape parameter is determined according to the shape of the clothing commodity in the non-fitted state;
inputting the size parameter, the posture parameter sequence, and the default shape parameter of the clothing model into a clothing shape prediction model, and outputting a non-default shape parameter sequence of the clothing model; the clothing shape prediction model is constructed based on a recurrent neural network; each non-default shape parameter in the non-default shape parameter sequence corresponds one-to-one to a posture parameter in the posture parameter sequence, and the non-default shape parameter corresponding to a posture parameter is the shape parameter of the clothing model after the shape change caused by the human body model trying on the clothing model under that posture parameter;
and fusing the human body model under each posture parameter with the clothing model under the non-default shape parameter corresponding to that posture parameter, to obtain a corresponding frame of the virtual fitting 3D image.
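The steps of the first aspect can be sketched as a data flow in code; `predict_shapes` and `fuse` are hypothetical stand-ins, since the specification names no concrete API:

```python
from typing import List, Sequence

def generate_fitting_frames(
    body_size: Sequence[float],        # size parameters of the human body model
    pose_seq: List[Sequence[float]],   # one posture parameter vector per frame
    default_shape: Sequence[float],    # garment shape in the non-fitted state
    predict_shapes,                    # hypothetical clothing shape prediction model
    fuse,                              # hypothetical fusion of posed body + deformed garment
) -> list:
    """Produce one virtual fitting 3D frame per posture in the sequence."""
    # The prediction model maps (size, posture sequence, default shape)
    # to one non-default shape parameter vector per posture.
    shape_seq = predict_shapes(body_size, pose_seq, default_shape)
    assert len(shape_seq) == len(pose_seq)  # one-to-one correspondence
    # Fuse the posed body model with the correspondingly deformed garment.
    return [fuse(pose, shape) for pose, shape in zip(pose_seq, shape_seq)]
```

The one-to-one correspondence between posture parameters and non-default shape parameters is what lets each output frame pair a posture with its matching garment deformation.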
According to a second aspect of various embodiments herein, there is provided a method of generating a virtual fitting video, comprising:
obtaining multiple frames of virtual fitting 3D images of a clothing commodity based on the method of generating a virtual fitting 3D image in the first aspect;
and obtaining a virtual fitting video of the clothing commodity from the obtained virtual fitting 3D images, the virtual fitting video being a 2D video or a 3D video.
According to a third aspect of the embodiments of the present specification, a video playing method applied to e-commerce live broadcast is provided, wherein virtual fitting videos of one or more clothing commodities are generated in advance based on the method of generating a virtual fitting video in the second aspect; the method comprises:
in response to a virtual fitting instruction, determining the clothing commodity to be tried on;
and playing the virtual fitting video of the clothing commodity to the user.
According to a fourth aspect of various embodiments herein, there is provided a computing device comprising a memory and a processor; the memory is configured to store computer instructions executable on the processor, and the processor is configured to implement the method of the first, second, or third aspect when executing the computer instructions.
According to a fifth aspect of various embodiments of the present description, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the method of the first, second, or third aspect.
To replace real-person fitting of apparel, a virtualized human body model and a virtualized clothing model may be employed to generate a virtual fitting 3D image. So that the presentation of the virtual fitting 3D image is not rigid but simulates the natural effect of a real person's fitting as closely as possible (and thus looks more lifelike), the association between the posture changes of the human body model and the resulting shape changes of the fitted clothing model is taken into account, and that association in the virtual fitting 3D image is kept as close as possible to the association in a real fitting scene, so that the generated virtual fitting 3D image is more lifelike.
Keeping the association between the two in the virtual fitting 3D image close to that in a real fitting scene requires discovering a certain association rule. This technical solution therefore adopts artificial intelligence (AI) technology, since an AI model can discover the association rule.
Considering that the posture changes of the human body model depend on the model's size characteristics and constitute dynamic data, the size parameter and the posture parameter sequence of the human body model are used to describe them; considering that the shape changes of the clothing model start from the clothing model's default shape and likewise constitute dynamic data, the default shape parameter and the non-default shape parameter sequence of the clothing model are used to describe them.
Further, to handle serialized data, a sequence-prediction AI algorithm is required to construct the model; a clothing shape prediction model is therefore built on a recurrent neural network, and the size parameter of the human body model, the posture parameter sequence of the human body model, and the default shape parameter of the clothing model are input to it. The output of the clothing shape prediction model is the non-default shape parameter sequence of the clothing model. Thus, given a human body model, a series of postures the model is to perform, and a clothing model, the shape of the clothing model under each posture can be predicted with the clothing shape prediction model, so that in a 3D image of the human body model simulating the fitting of the clothing model, the garment's shape looks natural and has a dynamic, flexible effect.
Through this technical solution, a more lifelike virtual fitting 3D image can be obtained.
Drawings
Fig. 1 exemplarily provides a flow of a method of generating a virtual fitting 3D image.
Fig. 2 exemplarily provides a flow of a video playing method applied to live e-commerce.
Fig. 3 is a schematic diagram of a computer-readable storage medium provided by the present disclosure.
Fig. 4 is a schematic structural diagram of a computing device provided by the present disclosure.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts. The number of any element shown in the drawings is illustrative rather than limiting, and any naming is used only for distinction and carries no limitation.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only a part of the embodiments of the present specification, and not all of the embodiments. All other embodiments obtained by a person skilled in the art based on the embodiments in the present specification without any inventive step should fall within the scope of protection of the present specification.
It should be noted that: in other embodiments, the steps of the corresponding methods are not necessarily performed in the order shown and described in this specification. In some other embodiments, the methods may include more or fewer steps than those described herein. Moreover, a single step described in this specification may be broken down into multiple steps in other embodiments; multiple steps described in this specification may be combined into a single step in other embodiments.
Several concepts used in this disclosure are first introduced.
Clothing commodity: the clothing commodities in the present disclosure cover apparel broadly, including not only clothes, trousers, shoes, socks, and the like, but also jewelry, hair accessories, pendants, and accessories (such as handbags).
Human body model: the human body model is a digital 3D model. It can be obtained by 3D-modeling a real person, or generated by simulation as the body model of a virtual person. A merchant can have its own human body models, and a user can also define a personalized human body model. Different human body models may differ in size and/or in corresponding gender.
Clothing model of a clothing commodity: the clothing model is a digital 3D model, which can be built by 3D modeling from data such as photos, videos, and material information of the physical clothing commodity. A merchant can save the clothing models and commodity numbers of at least some of its clothing commodities to a database. The merchant can also tag different clothing commodities with style labels, such as sweet, vintage, neutral, or casual, and associate each style label with a matching styling scheme (such as hairstyle, makeup, and other clothing commodities to pair with the commodity) and matching try-on show actions (such as hands on hips, turning around, smiling, or jumping). It is easy to understand that one try-on show action can be seen as a sequence of consecutive postures.
Clothing shape prediction model: as those skilled in the art will understand, the "model" in "clothing shape prediction model" and the "model" in "3D model" are concepts of different dimensions.
Virtual fitting 3D image: this refers to one frame of a 3D image; it is easy to understand that multiple frames of virtual fitting 3D images can form a virtual fitting 3D video. A virtual fitting 2D image is the projection of a virtual fitting 3D image onto a plane at a certain angle. When a user rotates the viewing angle of the combined model (i.e., the combination of the human body model and the clothing model of the clothing commodity) in the virtual fitting 3D image, different planar projection results (i.e., virtual fitting 2D images) are generated, computed by rendering. Multiple frames of virtual fitting 2D images can form a virtual fitting 2D video.
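As a rough illustration of the projection just described, mapping the combined model onto a plane at a user-chosen angle can be sketched as a rotation about the vertical axis followed by an orthographic drop of the depth coordinate. This is only a simplified sketch; a real renderer would use perspective projection, lighting, and the full mesh:

```python
import math
from typing import List, Tuple

Point3D = Tuple[float, float, float]

def project_2d(points: List[Point3D], angle_deg: float) -> List[Tuple[float, float]]:
    """Rotate the combined model about the y (vertical) axis by angle_deg,
    then project orthographically onto the x-y plane (drop the depth)."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    out = []
    for x, y, z in points:
        # rotation about the vertical axis
        xr = cos_a * x + sin_a * z
        # orthographic projection: keep (x', y), discard depth
        out.append((xr, y))
    return out
```

Each rotation angle chosen by the user yields a different planar projection, i.e. a different virtual fitting 2D image.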
The technical scheme is introduced as follows:
to replace real-person fitting apparel, a virtualized human body model and a virtualized apparel model may be employed to generate a virtual fitting 3D image. In order to ensure that the presenting effect of the virtual fitting 3D image is not rigid, but the natural effect of the real person fitting is simulated as much as possible (the effect is more vivid), the relevance between the posture change of the human body model and the shape change of the clothes model caused by fitting the clothes is considered, the relevance between the two in the virtual fitting 3D image is close to the relevance between the two in the real person fitting scene as much as possible, and the generated virtual fitting 3D image can be more vivid.
And the relevance between the two in the virtual fitting 3D image is close to the relevance between the two in the real person fitting scene as much as possible, and a certain relevance rule needs to be discovered. Therefore, the technical scheme adopts an artificial intelligence AI technology, and an artificial intelligence model can be used for discovering the association rule.
Considering that the posture change of the human body model depends on the size characteristics of the human body model and is dynamic data, the size parameter and the posture parameter sequence of the human body model are adopted to describe the posture change of the human body model; considering that the shape change of the clothing model is changed from the default shape of the clothing model and is dynamic data, the shape change of the clothing model is described by adopting the default shape parameter and non-default shape parameter sequence of the clothing model.
Further, in order to adapt to the processing of the serialized data, a sequence prediction type AI algorithm is required to be adopted to construct an artificial model, so that a clothing shape prediction model is constructed by adopting a recurrent neural network, and the size parameters of the human body model, the posture parameter sequence of the human body model and the default shape parameters of the clothing model are input into the clothing shape prediction model. And the output of the apparel shape prediction model is a non-default shape parameter sequence of the apparel model. Therefore, on the premise of giving a human body model, a series of postures to be done by the human body model and a dress model, the shape of the dress model under each posture of the human body model can be predicted by the dress shape prediction model, so that in a 3D image of the dress model simulated by the human body model, the shape of the dress model looks natural and has a dynamic flexible effect.
Therefore, through the technical scheme, a more vivid virtual fitting 3D image can be obtained. In addition, by means of the technical scheme, only the calculation power is needed to be consumed in the process of training the clothing shape prediction model, under the scene that a large number of virtual fitting videos of clothing commodities need to be generated, the shape prediction model is directly provided for a merchant to use, the merchant can directly generate 3D images in batches by using the clothing shape prediction model, the calculation power consumption is lower, and the efficiency of generating the 3D images is high.
The above technical solution for generating virtual fitting 3D images is particularly applicable to e-commerce live scenarios. In such a scenario, it is increasingly common for e-commerce merchants to market goods to users (as live-stream viewers) through internet live broadcasts. When a host promotes apparel goods (such as clothes, trousers, shoes, hats, or accessories) to users in a live broadcast, the host usually tries the goods on in person to show how they look when worn and attract users to buy them. In an actual live broadcast, however, the host can hardly satisfy every fitting request raised by users in time, so fitting 3D images of certain apparel goods produced in advance need to be played to users.
For example, while the host is introducing apparel product A, a user posts a comment asking the host to try on apparel product B. The host must either interrupt the introduction of product A to try on product B, which hurts the live broadcast, or temporarily ignore the user's request, which hurts that user's viewing experience.
For another example, when the host tries on clothes during a live broadcast, the host has to leave the camera temporarily and spend time changing, which also affects users' viewing experience; if the host must try on several garments in succession, even more time is spent matching the corresponding hairstyle, makeup, and so on.
With the above technical solution, by contrast, the virtual fitting 3D image of any apparel product can be played to users at any time. The virtual fitting video the user watches is high-fidelity and lifelike: it simulates the garment shape changes that would occur in a real fitting, presents a very natural effect, and gives the user an experience close to watching a real person try on the clothing, without waiting for the host to change.
In addition, the hairstyle and makeup of the human body model in the virtual fitting video can be changed flexibly, which is far more efficient than having the host adjust hairstyle and makeup.
It should be noted that, the above technical solution for generating a virtual fitting 3D image can be applied not only to live scenes of e-commerce, but also to other scenes in which a fitting effect of clothing goods needs to be displayed. For example, when a user browses an interface of a clothing commodity on an e-commerce platform, the user may click an introduction picture of the clothing commodity, the introduction picture may be one or more frames of virtual try-on 3D images, and the user may rotate an angle of a combination model (i.e., a combination of a human body model and a clothing model of the clothing commodity) in the virtual try-on 3D images to view a try-on effect from multiple angles.
Therefore, the later description of applying this technical solution to the e-commerce live scenario is only one possible implementation and does not limit the range of scenarios to which the solution applies. After understanding the solution, those skilled in the art will readily apply it to other scenarios that need to show the fitting effect of apparel goods, without additional creative effort.
The technical scheme is described in detail in the following with reference to the accompanying drawings.
Fig. 1 exemplarily provides the flow of a method of generating a virtual fitting 3D image, including:
S100: determining the human body model to be used for fitting and the clothing model of the clothing commodity to be tried on.
S102: acquiring the size parameter and the posture parameter sequence of the human body model.
S104: acquiring the default shape parameter of the clothing model.
S106: inputting the size parameter, the posture parameter sequence, and the default shape parameter of the clothing model into the clothing shape prediction model, and outputting the non-default shape parameter sequence of the clothing model.
S108: fusing the human body model under each posture parameter with the clothing model under the corresponding non-default shape parameter to obtain a corresponding frame of the virtual fitting 3D image.
The method shown in fig. 1 may be implemented by a merchant of apparel goods.
Before the flow shown in fig. 1 is carried out, a clothing shape prediction model constructed on a recurrent neural network (RNN) may be trained. The training party may be the merchant or a technical service provider; where the technical service provider trains the model, it can provide the trained model to merchants for use.
As mentioned above, a set of input data for the model may consist of: the size parameters of the human body model (three-dimensional measurements such as length, width, and height, corresponding respectively to build, breadth, and stature, possibly supplemented by head-to-body ratio, head-to-shoulder ratio, and the like), the posture parameter sequence of the human body model, and the default shape parameter of the clothing model of the clothing commodity.
The posture parameter sequence of the human body model comprises a plurality of consecutive posture parameters for the model to perform a plurality of consecutive posture changes. The default shape parameter of the clothing model is determined according to the shape of the clothing commodity in the non-fitted state.
Those skilled in the art can readily conceive various technical means to define the above posture parameters and shape parameters; examples are given here. For instance, a posture parameter may be the inclination angle of each joint of the human body model relative to the horizontal plane, and a shape parameter may be the three-dimensional coordinates describing the garment's shape in a point-cloud coordinate system (such a coordinate system can be established with MeshLab software).
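Under the example definitions above (joint inclination angles for posture, point-cloud coordinates for shape), one possible encoding, purely illustrative and not taken from the specification, might look like:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PoseParams:
    # inclination angle (degrees) of each joint of the human body model
    # relative to the horizontal plane
    joint_angles: List[float]

@dataclass
class ShapeParams:
    # 3D coordinates of the garment's surface points in a point-cloud
    # coordinate system (e.g. one established with MeshLab)
    points: List[Tuple[float, float, float]]

# A posture parameter *sequence* is one PoseParams per frame of the motion,
# e.g. a two-joint arm gradually raising its forearm:
wave_hand = [PoseParams([90.0, 45.0]), PoseParams([90.0, 60.0]), PoseParams([90.0, 75.0])]
```

The clothing shape prediction model would then map such a sequence (plus size and default-shape inputs) to one `ShapeParams` per `PoseParams`.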
A set of output data of the model may be the non-default shape parameter sequence of the clothing model. The clothing shape prediction model is constructed on a recurrent neural network; each non-default shape parameter in the sequence corresponds one-to-one to a posture parameter in the posture parameter sequence, and the non-default shape parameter corresponding to a posture parameter is the shape parameter of the clothing model after the shape change caused by the human body model trying on the clothing model under that posture.
It is readily appreciated that a set of input data describes a virtual human model about to try on a clothing commodity, while the output data describes the shape the garment takes after being deformed by the virtual model's fitting.
In the model training phase, training labels must be specified, usually by the training party. Given a series of postures to be performed by the human body model and a clothing model, the training party can use a physics computation engine to compute the shape parameters of the clothing model under each posture of the human body model, forming a ground-truth non-default shape parameter sequence to serve as the training label.
Alternatively, the training party can have a real person try on the clothing commodity and determine the garment's shape parameters under each posture of that person, likewise forming a ground-truth non-default shape parameter sequence.
During training, the non-default shape parameter sequence predicted by the model is made to approach the ground-truth sequence through iterative training, after which training is complete.
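The training objective just described can be sketched as a mean-squared error between the predicted non-default shape parameter sequence and the ground-truth sequence produced by the physics engine or real fitting. The following pure-Python loss is an illustrative sketch, not tied to any framework named in the specification:

```python
from typing import List, Sequence

def sequence_mse(
    predicted: List[Sequence[float]],
    ground_truth: List[Sequence[float]],
) -> float:
    """MSE between a predicted non-default shape parameter sequence and its
    label sequence; iterative training drives this value toward zero."""
    assert len(predicted) == len(ground_truth)
    total, count = 0.0, 0
    for pred, true in zip(predicted, ground_truth):
        for p, t in zip(pred, true):
            total += (p - t) ** 2
            count += 1
    return total / count
```

Each training iteration would update the recurrent network's weights to reduce this loss over the labeled sequences.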
In the training phase, the model may be trained on a plurality of different human body models and/or a plurality of different clothing commodities; that is, different sets of model input data may correspond to different human body models, or to different clothing commodities.
In the model application stage, the human body model to be used and the clothing model of the clothing commodity to be tried on are determined; the size parameter and posture parameter sequence of the human body model and the default shape parameter of the clothing model are input, as a set of model input data, into the clothing shape prediction model, which outputs the predicted non-default shape parameter sequence of the clothing model.
Then, based on that non-default shape parameter sequence, the human body model under each posture parameter is fused with the clothing model under the corresponding non-default shape parameter to obtain a corresponding frame of the virtual fitting 3D image.
In some embodiments, the generated virtual fitting 3D image may show, in addition to the clothing commodity being fitted, other accompanying apparel goods. For example, if the current fitting target is a red coat, the accompanying goods might include a pair of pink trousers, a pair of blue leather shoes, and a pair of black-framed glasses: the human body model tries on the red coat, the pink trousers, the blue leather shoes, and the black-framed glasses simultaneously in the virtual fitting video corresponding to the red coat. It is easy to understand that the virtual fitting 3D images of different apparel goods may therefore be the same.
To that end, the other clothing models of at least one other clothing commodity to be fitted may be determined, and the default shape parameter of each other clothing model acquired; for each other clothing model, the size parameter and posture parameter sequence of the human body model and the default shape parameter of that clothing model are input into the clothing shape prediction model, which outputs the non-default shape parameter sequence of that clothing model; and the human body model under each posture parameter is fused with the target clothing model under the corresponding non-default shape parameter, together with each other clothing model, to obtain a corresponding frame of the virtual fitting 3D image.
In some embodiments, the human body model under each posture parameter may be fused with the clothing model under the corresponding non-default shape parameter, and the simulated natural-light reflections presented by the fused models rendered, to obtain a corresponding frame of the virtual fitting 3D image.
In this way, the frame presents not only the natural deformation of the garment after fitting but also the light-and-shadow effects that deformation produces.
In addition, after obtaining multiple frames of virtual fitting 3D images of a clothing commodity via the flow shown in fig. 1, a virtual fitting video of the commodity may be assembled from the obtained virtual fitting 3D images; the virtual fitting video may be a 2D video or a 3D video.
Fig. 2 exemplarily provides the flow of a video playing method applied to e-commerce live broadcast, including:
S200: in response to a virtual fitting instruction, determining the clothing commodity to be tried on.
S202: playing the virtual fitting video of that clothing commodity to the user.
In the e-commerce live scenario, the flow shown in fig. 2 may be executed by the live-room system. Virtual fitting videos of one or more clothing commodities may be generated based on the method of generating a virtual fitting video described above. A merchant can generate these videos in advance, before the live broadcast starts, or generate some of them on demand in real time after the broadcast has started.
The clothing commodity to be tried on may be determined in response to a virtual fitting instruction sent by the e-commerce-side system, or in response to one sent by a user-side client.
That is, either the host or any user watching the live broadcast can trigger the playing of a virtual fitting video.
Likewise, the human body model to be used may be determined in response to a virtual fitting instruction from the user-side client or from the e-commerce-side system; that is, when playback of a virtual fitting video is initiated, either the user or the e-commerce side may specify the human body model. In general, the virtual fitting videos that an e-commerce merchant generates for a given clothing commodity may come in different mannequin versions: for example, different videos may be generated with human body models of different genders, or with human body models of different sizes.
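Serving different mannequin versions of a pre-generated video can be sketched as a simple keyed lookup; the catalog structure and file names below are hypothetical:

```python
# Hypothetical catalog: pre-generated virtual fitting videos for one clothing
# commodity, keyed by (gender, size) of the human body model version.
fitting_videos = {
    ("female", "S"): "video_f_s.mp4",
    ("female", "M"): "video_f_m.mp4",
    ("male", "L"): "video_m_l.mp4",
}

def select_video(gender: str, size: str, default=("female", "M")) -> str:
    """Return the video for the requested mannequin version, falling back
    to a default version when no exact match was pre-generated."""
    return fitting_videos.get((gender, size), fitting_videos[default])
```

When a user-side client specifies a personalized human body model instead, the video would be generated for that model rather than looked up.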
Further, the user-side client of a given user may be pre-configured with a personalized human body model, for example a model of the user's own body. In this way, the user can watch a virtual fitting video in which that personalized model tries on a given clothing commodity.
In addition, when there are multiple user-side clients (in practice, many users often watch a live broadcast at the same time) and the clothing commodity to be tried on was determined in response to a virtual fitting instruction sent by one user-side client, the virtual fitting video of that clothing commodity may be played only to that client.
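The audience-scoping rule above can be sketched as a small routing policy; the source names ("user" vs. "ecommerce") and the play-to-all behavior for e-commerce-side instructions are assumptions for illustration:

```python
def playback_targets(instruction_source, requester_id, all_viewer_ids):
    """Decide which clients receive the virtual fitting video.

    Hypothetical policy: an instruction from one user-side client plays
    only to that client; an e-commerce-side instruction plays to everyone.
    """
    if instruction_source == "user":
        return [requester_id]
    return list(all_viewer_ids)
```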
In some embodiments, when the virtual fitting video of the clothing commodity is a 2D video, a first live video stream currently to be played to the user may be acquired, the virtual fitting video of the clothing commodity may be fused into the first live video stream to obtain a second live video stream, and the second live video stream may then be played to the user.
Further, the picture of the second live video stream may include a first picture area and a second picture area: the first picture area contains the picture of the first live video stream, and the second picture area contains the picture of the virtual fitting video of the clothing commodity. In practice, while watching the live video stream the user thus sees the human anchor introducing the commodity on one side of the picture and, on the other side, a virtual model trying on the clothing commodity.
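The two-picture-area fusion can be illustrated with a toy frame representation (row-major lists of pixels); a real implementation would composite decoded video planes, so everything below is a simplified sketch:

```python
def compose_side_by_side(live_frame, tryon_frame):
    """Compose one output frame: the first picture area (left) holds the
    live-stream picture, the second picture area (right) holds the
    virtual try-on picture.

    Frames are lists of pixel rows of equal height (toy representation).
    """
    if len(live_frame) != len(tryon_frame):
        raise ValueError("frame heights must match")
    return [live_row + tryon_row
            for live_row, tryon_row in zip(live_frame, tryon_frame)]
```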
In some embodiments, the virtual fitting 2D video need not be merged into the live video stream; instead, a sub-interface may be popped up in the live-viewing interface in response to a user operation instruction, and the virtual fitting video of the clothing commodity selected by the user may be played in that sub-interface.
In some embodiments, if the virtual fitting video is a 3D video, a 3D virtual anchor may be rendered live in the broadcast room, and this 3D virtual anchor may demonstrate the fitting effects of several clothing commodities on the spot. Users can see the 3D virtual anchor through the live video stream.
In practice, if a user starts watching the live broadcast before the anchor has begun working, or the anchor temporarily leaves during the broadcast, virtual fitting videos of selected clothing commodities can be played to the user, which effectively keeps the user watching the broadcast.
In practical applications, the merchant may attach style labels to clothing commodities, such as sweet, vintage, neutral, or casual. Different styles imply different wearing schemes and/or different fitting display actions (which can be understood as sequences of display postures), such as turning in a circle, crossing the arms, smiling, or jumping. Besides hair style and makeup, a wearing scheme may also involve at least one clothing commodity other than the target clothing commodity.
In some embodiments, the step of generating virtual fitting videos of one or more clothing commodities may include: for each of the one or more clothing commodities, determining at least one wearing scheme corresponding to the clothing commodity; for each wearing scheme, determining the clothing commodities in the scheme other than the target commodity; and generating the virtual fitting video of the clothing commodity corresponding to that wearing scheme.
Further, the wearing scheme specified by a virtual fitting instruction may be determined in response to that instruction, so that the virtual fitting video of the clothing commodity corresponding to the specified wearing scheme can be played to the user.
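The per-scheme generation loop described above can be sketched as follows; the scheme structure and `generate_video` callable are hypothetical stand-ins for the claimed generation method:

```python
def videos_per_scheme(target_item, schemes, generate_video):
    """Generate one virtual fitting video per wearing scheme of an item.

    schemes: list of dicts like {"name": ..., "goods": [...]}, where
    "goods" lists every clothing commodity in the scheme (hypothetical
    structure). generate_video(target, companions) stands in for the
    video-generation method.
    """
    result = {}
    for scheme in schemes:
        # companion commodities: everything in the scheme except the target
        companions = [g for g in scheme["goods"] if g != target_item]
        result[scheme["name"]] = generate_video(target_item, companions)
    return result
```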
In other embodiments, the step of generating virtual fitting videos of one or more clothing commodities may include: for each of the one or more clothing commodities, determining the display postures corresponding to the clothing commodity; determining a posture parameter sequence of the human body model according to those display postures; and generating the virtual fitting video of the clothing commodity.
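One simple way to realize the "display postures to posture parameter sequence" step is linear interpolation between consecutive key postures; this is an illustrative assumption (a production system might use skeletal or quaternion blending instead), and all names are hypothetical:

```python
def posture_parameter_sequence(key_postures, steps_between=4):
    """Expand discrete display postures into a continuous posture
    parameter sequence by linear interpolation.

    key_postures: list of posture parameter vectors (lists of floats).
    steps_between: interpolated steps inserted between consecutive keys.
    """
    sequence = []
    for start, end in zip(key_postures, key_postures[1:]):
        for step in range(steps_between):
            t = step / steps_between
            sequence.append([a + t * (b - a) for a, b in zip(start, end)])
    sequence.append(list(key_postures[-1]))  # include the final posture
    return sequence
```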
The embodiments above that consider different wearing schemes may be combined with the embodiments that consider different display actions.
The present disclosure also provides a computer-readable storage medium, as shown in fig. 3; a computer program is stored on the medium 140 and, when executed by a processor, implements the method of the embodiments of the present disclosure.
The present disclosure also provides a computing device including a memory and a processor; the memory stores computer instructions executable on the processor, and the processor implements the method of the embodiments of the present disclosure when executing those instructions.
Fig. 4 is a schematic structural diagram of a computing device provided by the present disclosure, where the computing device 15 may include, but is not limited to: a processor 151, a memory 152, and a bus 153 that connects the various system components, including the memory 152 and the processor 151.
The memory 152 stores computer instructions executable by the processor 151, enabling the processor 151 to perform the method of any embodiment of the present disclosure. The memory 152 may include a random access memory unit RAM 1521, a cache memory unit 1522, and/or a read-only memory unit ROM 1523. The memory 152 may further include a program tool 1525 having a set of program modules 1524; the program modules 1524 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data, and each or some combination of these may include an implementation of a network environment.
The bus 153 may include, for example, a data bus, an address bus, and a control bus. The computing device 15 may also communicate with external devices 155, such as a keyboard or a Bluetooth device, through the I/O interface 154. The computing device 15 may also communicate with one or more networks, such as a local area network, a wide area network, or a public network, through the network adapter 156. As shown, the network adapter 156 may communicate with the other modules of the computing device 15 via the bus 153.
Further, while the operations of the disclosed methods are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
While the spirit and principles of the present disclosure have been described with reference to several particular embodiments, it is to be understood that the present disclosure is not limited to the particular embodiments disclosed, and that the division into aspects is for convenience of description only; features in different aspects may be combined to advantage. The disclosure is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functions of the various elements may be implemented in the same one or more software and/or hardware implementations of the present description.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
This description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. In a typical configuration, a computer includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement the information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic disk storage, quantum memory, graphene-based storage media or other magnetic storage devices, or any other non-transmission medium, that may be used to store information that may be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a(n)" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element.
The foregoing describes several embodiments of the present specification. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The terminology used in the description of the various embodiments is for the purpose of describing particular embodiments only and is not intended to be limiting of the embodiments herein. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in various embodiments of the present specification to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the embodiments herein, first information may also be referred to as second information, and similarly, second information may be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "when," "while," or "in response to determining."
The embodiments in this specification are described in a progressive manner; for identical or similar parts, the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, since the device embodiments are substantially similar to the method embodiments, their description is relatively brief; for the relevant points, refer to the corresponding description of the method embodiments. The method embodiments described above are merely illustrative; modules described as separate components may or may not be physically separate, and the functions of the modules may be implemented in one or more pieces of software and/or hardware when the embodiments of the present disclosure are implemented. Some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiment's solution. Persons of ordinary skill in the art can understand and implement this without inventive effort.
The above description is only for the purpose of illustrating the preferred embodiments of the present disclosure, and is not intended to limit the present disclosure to the embodiments, and any modifications, equivalents, improvements and the like made within the spirit and principle of the embodiments should be included in the scope of the present disclosure.

Claims (19)

1. A method of generating a virtual try-on 3D image, comprising:
determining a human body model to be tried on and a clothing model of a clothing commodity to be tried on; the human body model and the clothing model belong to a digital 3D model;
acquiring a size parameter and a posture parameter sequence of the human body model; the posture parameter sequence comprises a plurality of consecutive posture parameters of the human body model undergoing a plurality of consecutive posture changes;
acquiring default shape parameters of the clothing model; the default shape parameter is determined according to the shape of the clothing commodity in the non-fitting state;
inputting the size parameter, the posture parameter sequence, and the default shape parameter of the clothing model into a clothing shape prediction model, and outputting a non-default shape parameter sequence of the clothing model; the clothing shape prediction model is constructed on the basis of a recurrent neural network, each non-default shape parameter in the non-default shape parameter sequence corresponds one-to-one to a posture parameter in the posture parameter sequence, and the non-default shape parameter corresponding to a posture parameter is the shape parameter of the clothing model after the shape change caused by the human body model trying on the clothing model under that posture parameter; and
fusing the human body model under each posture parameter with the clothing model under the non-default shape parameter corresponding to that posture parameter, to obtain a corresponding frame of virtual fitting 3D image.
2. The method of claim 1, wherein the clothing shape prediction model is trained based on a plurality of different human body models and/or clothing models of a plurality of different clothing commodities.
3. The method of claim 2, wherein different human body models differ in size and/or in corresponding gender.
4. The method of claim 1, further comprising:
determining other clothing models of at least one other clothing commodity to be tried on;
acquiring default shape parameters of each other clothing model;
for each other clothing model, inputting the size parameter and the posture parameter sequence of the human body model and the default shape parameter of that other clothing model into the clothing shape prediction model, and outputting the non-default shape parameter sequence of that other clothing model;
wherein fusing the human body model under each posture parameter with the clothing model under the non-default shape parameter corresponding to the posture parameter comprises:
fusing the human body model under each posture parameter with the clothing model under the non-default shape parameter corresponding to that posture parameter and with each other clothing model, to obtain a corresponding frame of virtual fitting 3D image.
5. The method of any one of claims 1 to 4, wherein fusing the human body model under each posture parameter with the clothing model under the non-default shape parameter corresponding to the posture parameter to obtain a corresponding frame of virtual fitting 3D image comprises:
fusing the human body model under each posture parameter with the clothing model under the non-default shape parameter corresponding to that posture parameter, and rendering the simulated natural light reflection presented by the fused human body model and clothing model, to obtain a corresponding frame of virtual fitting 3D image.
6. A method of generating a virtual fitting video, comprising:
obtaining multiple frames of virtual fitting 3D images of a clothing commodity based on the method for generating a virtual fitting 3D image of any one of claims 1 to 5; and
obtaining a virtual fitting video of the clothing commodity according to the obtained virtual fitting 3D images, wherein the virtual fitting video is a 2D video or a 3D video.
7. A video playing method applied to live e-commerce broadcasting, wherein virtual fitting videos of one or more clothing commodities are generated based on the method for generating a virtual fitting video of claim 6; the method comprises:
responding to a virtual fitting instruction, and determining the clothing commodity to be tried on; and
playing the virtual fitting video of the clothing commodity to the user.
8. The method of claim 7, wherein determining the clothing commodity to be tried on in response to the virtual fitting instruction comprises:
responding to a virtual fitting instruction sent by an e-commerce-side system, and determining the clothing commodity to be tried on;
or
responding to a virtual fitting instruction sent by a user-side client, and determining the clothing commodity to be tried on.
9. The method of claim 8, further comprising:
responding to a virtual fitting instruction sent by the user-side client, and determining the human body model to be tried on.
10. The method of claim 7, wherein the human body model to be tried on is a personalized human body model pre-configured by the user-side client.
11. The method of claim 8, further comprising:
and responding to a virtual fitting instruction sent by the E-commerce side system, and determining the human body model to be fitted.
12. The method of claim 8, wherein there are a plurality of user-side clients;
wherein, in a case that the clothing commodity to be tried on is determined in response to a virtual fitting instruction sent by one user-side client, playing the virtual fitting video of the clothing commodity to the user comprises:
playing the virtual fitting video of the clothing commodity only to that user-side client.
13. The method of claim 7, wherein claim 6 depends on claim 4 or 5, and the step of generating the virtual fitting videos of one or more clothing commodities comprises:
for each of the one or more clothing commodities, determining at least one wearing scheme corresponding to the clothing commodity;
for each wearing scheme, determining the clothing commodities in the wearing scheme other than the target clothing commodity; and
generating, based on the method for generating a virtual fitting video of claim 6, the virtual fitting video of the clothing commodity corresponding to the wearing scheme.
14. The method as recited in claim 13, further comprising:
responding to the virtual fitting instruction, and determining the wearing scheme specified by the virtual fitting instruction;
wherein playing the virtual fitting video of the clothing commodity to the user comprises:
playing the virtual fitting video of the clothing commodity corresponding to the specified wearing scheme to the user.
15. The method of claim 7, wherein the step of generating the virtual fitting videos of one or more clothing commodities comprises:
for each of the one or more clothing commodities, determining the display postures corresponding to the clothing commodity;
determining a posture parameter sequence of the human body model according to the display postures corresponding to the clothing commodity; and
generating, based on the method for generating a virtual fitting video of claim 6, the virtual fitting video of the clothing commodity.
16. The method of claim 7, wherein, in a case that the virtual fitting video of the clothing commodity is a 2D video, playing the virtual fitting video of the clothing commodity to the user comprises:
acquiring a first live video stream currently to be played to the user, and fusing the virtual fitting video of the clothing commodity into the first live video stream to obtain a second live video stream; and
playing the second live video stream to the user.
17. The method of claim 16, wherein the picture of the second live video stream comprises a first picture area and a second picture area, the first picture area comprises the picture of the first live video stream, and the second picture area comprises the picture of the virtual fitting video of the clothing commodity.
18. A computing device comprising a memory and a processor; the memory is configured to store computer instructions executable on the processor, and the processor is configured to implement the method of any one of claims 1 to 17 when executing the computer instructions.
19. A computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the method of any one of claims 1 to 17.
CN202210338163.5A 2022-04-01 2022-04-01 Method for generating virtual fitting 3D image Active CN114445271B (en)


Publications (2)

Publication Number Publication Date
CN114445271A 2022-05-06
CN114445271B 2022-06-28




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant