CN115375823B - Three-dimensional virtual clothing generation method, device, equipment and storage medium - Google Patents


Info

Publication number
CN115375823B
CN115375823B (application CN202211290183.6A)
Authority
CN
China
Prior art keywords
clothing
image
dimensional
reconstruction
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211290183.6A
Other languages
Chinese (zh)
Other versions
CN115375823A (en)
Inventor
杨少雄
赵晨
陈睿智
刘经拓
孙昊
丁二锐
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202211290183.6A
Publication of CN115375823A
Application granted
Publication of CN115375823B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/04 Indexing scheme for image data processing or generation, in general, involving 3D image data
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2021 Shape modification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides a three-dimensional virtual clothing generation method, apparatus, device, and storage medium, relating to the field of artificial intelligence and in particular to the technical fields of augmented reality, virtual reality, computer vision, deep learning, and the like. The specific implementation scheme is as follows: acquiring a clothing image; performing three-dimensional model fitting reconstruction according to the clothing image and clothing prior information related to the clothing shape, to generate a target clothing model; and performing texture rendering on the target clothing model according to the clothing image, to generate a three-dimensional virtual clothing corresponding to the clothing image. In this way, high-precision reconstruction of three-dimensional virtual clothing can be achieved from a single image, improving the reconstruction quality of the three-dimensional virtual clothing.

Description

Three-dimensional virtual clothing generation method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence, in particular to the technical fields of augmented reality, virtual reality, computer vision, deep learning, and the like, and specifically to a method, an apparatus, a device, and a storage medium for generating three-dimensional virtual clothing.
Background
In the field of artificial intelligence, a personalized and engaging three-dimensional avatar can be generated for a user, and three-dimensional virtual clothing can be generated for that avatar.
In the related art, to improve the reconstruction efficiency of three-dimensional virtual clothing, the clothing can be reconstructed by pixel-aligning image data of the clothing.
However, the reconstruction quality of three-dimensional virtual clothing obtained in this way is poor.
Disclosure of Invention
The present disclosure provides a three-dimensional virtual clothing generation method, apparatus, device, and storage medium for improving the reconstruction quality of three-dimensional virtual clothing.
According to a first aspect of the present disclosure, there is provided a three-dimensional virtual clothing generation method, including:
acquiring a clothing image;
performing three-dimensional model fitting reconstruction according to the clothing image and clothing prior information related to the clothing shape, to generate a target clothing model;
and performing texture rendering on the target clothing model according to the clothing image to generate a three-dimensional virtual clothing corresponding to the clothing image.
According to a second aspect of the present disclosure, there is provided a three-dimensional virtual garment generation apparatus, comprising:
the image acquisition unit is used for acquiring a clothing image;
the three-dimensional model building unit is used for carrying out three-dimensional model fitting reconstruction according to the clothing image and clothing prior information related to the clothing shape to generate a target clothing model;
and the texture rendering unit is used for performing texture rendering on the target clothing model according to the clothing image, to generate a three-dimensional virtual clothing corresponding to the clothing image.
According to a third aspect of the present disclosure, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the three-dimensional virtual clothing generation method of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the three-dimensional virtual clothing generation method of the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program stored in a readable storage medium; at least one processor of an electronic device may read the computer program from the readable storage medium, and execution of the computer program by the at least one processor causes the electronic device to perform the three-dimensional virtual clothing generation method of the first aspect.
According to the technical solution provided by the present disclosure, a three-dimensional clothing model is constructed from the clothing image by three-dimensional model fitting reconstruction based on clothing prior information related to the clothing shape, which improves the accuracy of the three-dimensional clothing model. Considering that the texture definition of a three-dimensional clothing model obtained by fitting reconstruction alone is insufficient, texture rendering is then performed on the model based on the clothing image to generate the corresponding three-dimensional virtual clothing, which improves its texture definition. Thus, by relying on the clothing prior information, reconstruction of the three-dimensional virtual clothing is achieved without depending on a large amount of image data, which improves the generalization capability, stability, and robustness of the reconstruction, and thereby the reconstruction quality of the three-dimensional virtual clothing.
It should be understood that the statements in this section are not intended to identify key or critical features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a schematic diagram of an application scenario to which the embodiment of the present disclosure is applied;
fig. 2 is a first schematic flowchart of a three-dimensional virtual clothing generation method provided according to an embodiment of the present disclosure;
fig. 3 is a second schematic flowchart of a three-dimensional virtual clothing generation method provided according to an embodiment of the present disclosure;
fig. 4 is a third schematic flowchart of a three-dimensional virtual clothing generation method provided according to an embodiment of the present disclosure;
fig. 5 is a fourth schematic flowchart of a three-dimensional virtual clothing generation method provided according to an embodiment of the present disclosure;
fig. 6 is a first schematic structural diagram of a three-dimensional virtual clothing generation apparatus provided in an embodiment of the present disclosure;
fig. 7 is a second schematic structural diagram of a three-dimensional virtual clothing generation apparatus provided in an embodiment of the present disclosure;
fig. 8 is a schematic block diagram of an example electronic device 800 that may be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the related art, a designer manually designs three-dimensional virtual clothing based on two-dimensional clothing images. This approach demands a great deal of the designer's time and effort, incurs high time and labor costs, and cannot generate three-dimensional virtual clothing in batches. To improve generation efficiency, three-dimensional virtual clothing can instead be reconstructed by pixel-aligning image data of the clothing, for example by using the Pixel-Aligned Implicit Function (PIFu) method to perform three-dimensional reconstruction of a virtual human body and virtual clothing. However, this approach has the following drawbacks:
the method has the defects that 1, the scheme needs to carry out three-dimensional modeling based on a large amount of image data, seriously depends on the distribution condition of the image data, and is difficult to carry out three-dimensional modeling under the condition of lacking the back image of the clothes, so the generalization of the scheme is insufficient, and the performance on a test data set is poor; defect 2, the quality of the three-dimensional virtual clothes reconstruction is poor, and the texture definition is insufficient; defect 3, modeling the human body and the clothes together, so that the same human body cannot be changed; and 4, the shape of the three-dimensional virtual clothes obtained by reconstruction is relatively broken and is difficult to bind and drive with a virtual human body.
To address the above defects, the present disclosure provides a three-dimensional virtual clothing generation method, applied to augmented reality, virtual reality, computer vision, deep learning, and related fields within artificial intelligence. In this method, three-dimensional model fitting reconstruction is performed on the basis of the clothing image and clothing prior information related to the clothing shape, which compensates for the insufficient clothing shape information in the clothing image. On one hand, this improves the shape integrity and accuracy of the reconstructed three-dimensional virtual clothing; on the other hand, the virtual clothing is modeled independently rather than together with the object wearing it, which facilitates virtual dressing of the same object and overcomes defects 3 and 4. After the three-dimensional clothing model is obtained, texture rendering is performed on it based on the clothing image to generate the corresponding three-dimensional virtual clothing, which improves the texture definition and overcomes defect 2.
In this way, by relying on the clothing image and the clothing prior information rather than on a large amount of image data, reconstruction of the three-dimensional virtual clothing is achieved, which effectively improves the stability and robustness of three-dimensional virtual clothing generation, improves the reconstruction quality, and overcomes defect 1.
Fig. 1 is a schematic diagram of an application scenario to which the embodiments of the present disclosure are applicable. The devices involved include an electronic device for three-dimensional virtual clothing generation, which may be a server or a terminal; fig. 1 takes a server 101 as an example. The server 101 may process a clothing image using the three-dimensional virtual clothing generation method provided by the embodiments of the present disclosure and generate the three-dimensional virtual clothing corresponding to the clothing image.
Optionally, the application scenario may also involve an electronic device that requests generation of the three-dimensional virtual clothing, which may likewise be a server or a terminal; fig. 1 takes a terminal 102 as an example. The terminal 102 may send the clothing image to the server 101 and request that the server 101 generate the three-dimensional virtual clothing corresponding to the clothing image; after generating it, the server 101 may transmit the three-dimensional virtual clothing to the terminal 102.
The server 101 may be a centralized server, a distributed server, or a cloud server. The terminal 102 may be a Personal Digital Assistant (PDA) device, a handheld device with wireless communication capability (e.g., a smartphone or a tablet computer), a computing device (e.g., a Personal Computer (PC)), an in-vehicle device, a wearable device (e.g., a smart watch or a smart band), or a smart home device (e.g., a smart speaker or a smart display device).
In one example, the application scenario is a virtual world (e.g., metaverse) scenario: a whole-body avatar is generated for each participant in the virtual world, and during generation of the avatar the virtual clothing can be generated using the technical solution provided by the embodiments of the present disclosure.
In another example, the application scenario is a live-streaming scenario: an avatar serving as a live-stream assistant may be generated based on one frame of the live video, and during generation of the avatar the virtual clothing may be generated using the technical solution provided by the embodiments of the present disclosure.
In addition, the application scene may also be a game scene, an offline image processing scene, an offline video processing scene, and the like, and these application scenes are not described one by one here.
The following describes the technical solutions of the present disclosure and how to solve the above technical problems in detail with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present disclosure will be described below with reference to the accompanying drawings.
Fig. 2 is a first flowchart of a three-dimensional virtual clothing generation method according to an embodiment of the present disclosure. As shown in fig. 2, the three-dimensional virtual clothing generation method includes:
s201, obtaining a clothing image.
A clothing image may correspond to a single garment; for example, the clothing image may be an image of a short-sleeved shirt, of shorts, or of a dress. Generating a three-dimensional virtual clothing independently for a single garment based on a single clothing image makes the data processing in the generation process finer-grained, improves the reconstruction quality of the three-dimensional virtual clothing, and yields an independent three-dimensional virtual clothing that is convenient for virtual dressing.
In this embodiment, one or more images of the garment may be acquired. If a clothing image is obtained, a three-dimensional virtual clothing corresponding to the clothing image can be generated through the embodiment; if a plurality of clothing images are obtained, the three-dimensional virtual clothing corresponding to each clothing image can be generated through the embodiment.
And S202, performing three-dimensional model fitting reconstruction according to the clothing image and clothing prior information related to the clothing shape to generate a target clothing model.
The clothing prior information related to the clothing shape may include clothing shape parameters, such as the coordinates of the garment's vertices and the lengths of its edges. The clothing prior information may be preset data, or clothing prior information input by a user may be acquired.
Optionally, the clothing prior information related to the clothing shape may also include a clothing style parameter and/or a clothing pose parameter. A clothing style parameter represents, for example, long sleeves, short sleeves, trousers, or shorts; a clothing pose parameter reflects the pose of the object wearing the clothing, e.g., the coordinates of various key points on the clothing. Providing shape-related prior information from multiple angles (shape, style, and pose) improves the accuracy of the three-dimensional model fitting reconstruction, and thereby the accuracy of the target clothing model and of the three-dimensional virtual clothing.
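The disclosure does not prescribe a data layout for these priors. As an illustrative sketch only (all names are hypothetical, in Python), the shape, style, and pose parameters described above might be grouped as:

```python
from dataclasses import dataclass, field

@dataclass
class ClothingPrior:
    """Hypothetical container for shape-related clothing priors.

    shape_params: e.g. vertex coordinates / edge lengths of a template garment
    style: a coarse style label (long sleeves, short sleeves, trousers, shorts)
    pose_keypoints: coordinates of key points reflecting the wearer's pose
    """
    shape_params: list = field(default_factory=list)
    style: str = "short_sleeve"
    pose_keypoints: list = field(default_factory=list)

    def adjust(self, deltas):
        """Return a nudged copy; the iterative fitting step would call this."""
        return ClothingPrior(
            shape_params=[p + d for p, d in zip(self.shape_params, deltas)],
            style=self.style,
            pose_keypoints=self.pose_keypoints,
        )

prior = ClothingPrior(shape_params=[1.0, 2.0], style="shorts")
updated = prior.adjust([0.1, -0.2])
```

A real system would hold mesh-sized parameter tensors here; the point is only that shape, style, and pose priors travel together and are adjusted between reconstruction rounds.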
In this embodiment, fitting reconstruction of the three-dimensional model can be performed based on the clothing prior information related to the clothing shape to obtain a three-dimensional clothing model, and whether the three-dimensional clothing model meets the reconstruction requirement is then determined according to the clothing image. If it does, the three-dimensional clothing model can be determined as the target clothing model; otherwise, the clothing prior information can be adjusted and the three-dimensional model reconstruction continued until the reconstruction requirement is met. Continuously optimizing the clothing prior information thus continuously optimizes the three-dimensional clothing model, improving the accuracy of the reconstruction and of the target clothing model.
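The fit, check, adjust loop described above can be sketched in miniature. In this toy Python example a small parameter vector stands in for the real mesh reconstruction and a target vector stands in for the shape cues of the clothing image; it is not the patent's actual optimizer, only the control flow it describes:

```python
def fit_clothing_model(image_target, prior, lr=0.5, tol=1e-3, max_iters=100):
    """Toy sketch of S202: reconstruct from the priors, compare against the
    clothing image, and keep adjusting the priors until the reconstruction
    requirement is met."""
    params = list(prior)
    for _ in range(max_iters):
        model = params  # stand-in for three-dimensional model fitting reconstruction
        diff = [m - t for m, t in zip(model, image_target)]
        err = sum(d * d for d in diff)
        if err < tol:   # reconstruction requirement met
            return model, err
        # adjust the clothing prior information (here: a gradient step)
        params = [p - lr * 2 * d for p, d in zip(params, diff)]
    return params, err

target = [0.4, 0.9, 0.1]  # shape cues extracted from the clothing image
model, err = fit_clothing_model(target, prior=[0.0, 0.0, 0.0])
```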
And S203, performing texture rendering on the target clothing model according to the clothing image to generate a three-dimensional virtual clothing corresponding to the clothing image.
In this embodiment, the three-dimensional clothing model mainly reflects the three-dimensional contour of the clothing in the clothing image, and its texture definition remains to be improved. Therefore, after the three-dimensional clothing model is obtained, texture rendering can be performed on it based on the clothing image, which reflects the clothing texture: the pixel values of a number of three-dimensional coordinate points on the model are determined, finally producing a three-dimensional clothing model with clear texture, i.e., the three-dimensional virtual clothing corresponding to the clothing image. This effectively improves the texture definition of the three-dimensional virtual clothing.
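As a toy illustration of this per-point texture lookup (an orthographic camera and a tiny pixel grid are assumed; the patent does not specify the projection model, so these choices are hypothetical):

```python
def texture_render(vertices, image, width, height):
    """Toy sketch of S203: give each 3D point of the garment model a pixel
    value by projecting it into the clothing image and sampling there.
    `image` is a height x width grid of pixel values; vertex x/y are
    normalised to [0, 1]."""
    textured = {}
    for (x, y, z) in vertices:
        # orthographic projection: drop z, map normalised coords to pixels
        u = min(int(x * width), width - 1)
        v = min(int(y * height), height - 1)
        textured[(x, y, z)] = image[v][u]
    return textured

# 2x2 "clothing image" with four pixel values
img = [[10, 20],
       [30, 40]]
colors = texture_render([(0.25, 0.25, 0.5), (0.75, 0.75, 0.1)], img, 2, 2)
```

A production renderer would interpolate, handle occlusion, and fill unseen regions; the sketch shows only the core idea of determining pixel values for three-dimensional coordinate points from the clothing image.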
In the embodiments of the present disclosure, three-dimensional model fitting reconstruction reduces the dependence of three-dimensional virtual clothing generation on image data, improves the generalization, stability, and robustness of the generation, and improves the shape accuracy of the three-dimensional virtual clothing. After the three-dimensional clothing model is constructed, texture rendering is performed on the target clothing model based on the clothing image to obtain the three-dimensional virtual clothing, improving its texture definition. The reconstruction quality of the three-dimensional virtual clothing is thus improved from both the shape and the texture angles. Because a three-dimensional virtual clothing can be generated independently for a single clothing image, clothing modeling is separated from modeling of the object wearing the clothing, so the three-dimensional virtual clothing can be bound to and driven with different objects, and can further be used to realize virtual dressing of the same object.
In the following, possible implementations of the various steps of the foregoing embodiments are provided:
In some embodiments, one possible implementation of S201 may include: acquiring an initial image in which a target object wears real clothing; and performing clothing identification and segmentation on the initial image to obtain the clothing image. Obtaining the clothing image of a single garment through identification and segmentation improves the accuracy of clothing image acquisition, and at the same time separates the clothing from the target object, so that the three-dimensional modeling of the clothing is decoupled from that of the target object. A three-dimensional virtual clothing is thus generated independently for a single garment, improving its fineness, and it can further be used to realize virtual dressing.
Wherein the target object may be a person and the initial image may be an image of the person.
In this embodiment, the initial image may be obtained from public image or video data; alternatively, an initial image input by a user may be received; alternatively, when a three-dimensional virtual clothing generation request is received, the initial image may be obtained from the request. After the initial image is obtained, a target segmentation network can be used to detect and segment the target object in the initial image to obtain a target object image, and clothing identification and segmentation can then be performed on the target object image to obtain at least one clothing image. The target segmentation network is a neural network whose segmentation target is clothing; its specific network structure and the identification and segmentation process of the clothing are not limited here.
In this embodiment, the target object in the target object image may wear one or more pieces of real clothing. When it wears multiple pieces, multiple clothing images may be obtained by identification and segmentation from the target object image, one clothing image per piece of real clothing. Identifying and segmenting clothing images thus separates the clothing from the target object and separates different garments from one another, so that a single three-dimensional virtual clothing is generated for each single real garment.
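The per-garment segmentation described above can be sketched as mask application. In this hypothetical Python example the binary masks stand in for the output of the (unspecified) segmentation network, and images are plain nested lists:

```python
def split_garments(image, garment_masks):
    """Toy sketch of the acquisition step: given a target object image and one
    binary mask per real garment (as a hypothetical segmentation network might
    produce), cut out one clothing image per garment; non-garment pixels are
    zeroed."""
    clothing_images = []
    for mask in garment_masks:
        clothing_images.append([
            [px if m else 0 for px, m in zip(row, mrow)]
            for row, mrow in zip(image, mask)
        ])
    return clothing_images

person = [[1, 2],
          [3, 4]]
shirt_mask = [[1, 1], [0, 0]]    # upper garment occupies the top row
shorts_mask = [[0, 0], [1, 1]]   # lower garment occupies the bottom row
shirt_img, shorts_img = split_garments(person, [shirt_mask, shorts_mask])
```

Each resulting image corresponds to one real garment, matching the one-image-per-garment rule stated above.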
Besides the above, the clothing image can also be obtained by receiving a clothing image input by a user, receiving a clothing image sent by another device, collecting public clothing images from the network, and the like.
In some embodiments, in a possible implementation of S202, the clothing image is used to adjust the clothing prior information, so that as the number of adjustments increases the prior information gradually approaches the shape, style, and pose of the clothing in the clothing image. This improves the accuracy of the adjustment, and thereby the accuracy and convergence speed of the target clothing model reconstruction.
Based on the clothing image being used to adjust the clothing prior information during generation of the target clothing model, fig. 3 is a second schematic flowchart of the three-dimensional virtual clothing generation method provided according to an embodiment of the present disclosure. As shown in fig. 3, the three-dimensional virtual clothing generation method includes:
and S301, acquiring a clothing image.
The implementation principle and the technical effect of S301 may refer to the foregoing embodiments, and are not described again.
S302, performing the Nth three-dimensional model fitting reconstruction according to the clothing prior information after the (N-1)th adjustment, to obtain the three-dimensional clothing model of the Nth reconstruction.
Here N is greater than or equal to 1 and represents the number of fitting reconstructions, so the value of N increases from 1 as the fitting reconstructions proceed.
In this embodiment, when N = 1, i.e., during the first three-dimensional model fitting reconstruction, initial clothing prior information may be obtained and the first fitting reconstruction performed according to it, yielding the three-dimensional clothing model of the first reconstruction. The initial clothing prior information can be determined randomly or preset by a professional. When N = 2, i.e., during the second fitting reconstruction, the clothing prior information after the first adjustment may be obtained and the second fitting reconstruction performed according to it, yielding the three-dimensional clothing model of the second reconstruction; and so on for larger N. During the Nth fitting reconstruction, the clothing prior information after the (N-1)th adjustment can be input into a clothing modeling network, which extracts features from the prior information and performs three-dimensional rendering based on the extracted features to generate the three-dimensional clothing model of the Nth reconstruction. Because the clothing modeling network can propagate gradients, the three-dimensional rendering it realizes is differentiable, and differentiable rendering can effectively improve the accuracy of the three-dimensional clothing model.
In one possible implementation, the clothing modeling network may include a coding network and a differentiable network: after the clothing prior information is input into the clothing modeling network, features are extracted from it by the coding network, and three-dimensional rendering is performed by the differentiable network based on the extracted features. This realizes feature extraction and differentiable rendering, improving the accuracy of the three-dimensional clothing model.
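To illustrate why an end-to-end differentiable composition of a coding network and a rendering network is useful, the following toy sketch chains two stand-in functions and takes a gradient through both by finite differences. All functions and constants here are hypothetical; a real implementation would backpropagate through an actual differentiable renderer rather than use finite differences:

```python
def encode(prior, w=2.0):
    """Stand-in for the coding (feature extraction) network."""
    return [w * p for p in prior]

def render(features, bias=0.5):
    """Stand-in for the differentiable rendering network: maps the feature
    vector to a scalar 'silhouette area' of the rendered garment."""
    return sum(features) + bias

def loss(prior, target_area):
    """Mismatch between the rendered garment and the clothing image cue."""
    return (render(encode(prior)) - target_area) ** 2

def grad(prior, target_area, eps=1e-6):
    """End-to-end gradient through encoder + renderer, by finite differences."""
    g = []
    for i in range(len(prior)):
        bumped = prior[:i] + [prior[i] + eps] + prior[i + 1:]
        g.append((loss(bumped, target_area) - loss(prior, target_area)) / eps)
    return g

# The gradient flows all the way back to the prior parameters, which is what
# lets the fitting loop adjust the clothing prior information.
g = grad([1.0, 1.0], target_area=6.0)
```

Analytically each component is 2 * (4.5 - 6.0) * 2.0 = -6, and the finite-difference estimate matches; that gradient is exactly what the prior-adjustment step needs.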
In addition, other three-dimensional modeling networks can also be used as the clothing modeling network, which is not repeated here.
And S303, determining whether the three-dimensional clothing model obtained by the Nth reconstruction meets the reconstruction requirement or not according to the clothing image.
In this embodiment, considering that the initial clothing prior information may be inaccurate and the generated three-dimensional clothing model may therefore be far from the model that actually corresponds to the clothing image, after the three-dimensional clothing model is generated it is possible to determine whether it meets the reconstruction requirement by comparing it with the clothing image, i.e., to determine whether it is close to the three-dimensional clothing model actually corresponding to the clothing image. If the three-dimensional clothing model of the Nth reconstruction meets the reconstruction requirement, S304 is executed; otherwise, S306 is executed.
In one possible implementation, S303 may include: performing two-dimensional projection on the three-dimensional clothing model of the Nth reconstruction to obtain a projection image; determining the shape difference between the projection image and the clothing image; and if the shape difference is smaller than a difference threshold, determining that the three-dimensional clothing model of the Nth reconstruction meets the reconstruction requirement, and otherwise that it does not.
In this implementation, considering that the three-dimensional clothing model and the clothing image have different dimensions, the three-dimensional clothing model obtained by the Nth reconstruction is subjected to two-dimensional projection to obtain a two-dimensional projection image. Since reconstructing the three-dimensional clothing model mainly amounts to establishing the three-dimensional shape of the clothing in the clothing image, the shape of the two-dimensional projection image is compared with that of the two-dimensional clothing image to obtain the shape difference between the projection image and the clothing image. If the shape difference between the projection image and the clothing image is smaller than the difference threshold, the shape of the projection image is close to that of the clothing image, meaning that the shape of the three-dimensional clothing model obtained by the Nth reconstruction is close to that of the three-dimensional clothing model actually corresponding to the clothing image; it is then determined that the three-dimensional clothing model obtained by the Nth reconstruction meets the reconstruction requirement. Otherwise, it is determined that the model does not meet the reconstruction requirement, and the (N+1)th reconstruction is required.
Therefore, projecting the three-dimensional clothing model and comparing the shape of the two-dimensional projection image with that of the two-dimensional clothing image improves the accuracy of determining whether the three-dimensional clothing model obtained by the Nth reconstruction meets the reconstruction requirement, and further improves the reconstruction accuracy of the three-dimensional clothing model.
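As an illustrative sketch only (the patent does not fix a specific shape measure), the comparison in S303 can operate on flattened binary silhouette masks, taking one minus the intersection-over-union as the shape difference; the IoU measure and the threshold value here are assumptions:

```python
def shape_difference(proj_mask, clothing_mask):
    """Shape difference between two flattened binary silhouette masks: 1 - IoU."""
    inter = sum(1 for p, c in zip(proj_mask, clothing_mask) if p and c)
    union = sum(1 for p, c in zip(proj_mask, clothing_mask) if p or c)
    return (1.0 - inter / union) if union else 0.0

def meets_reconstruction_requirement(proj_mask, clothing_mask, diff_threshold=0.1):
    """The Nth reconstruction is accepted when the shape difference is below the threshold."""
    return shape_difference(proj_mask, clothing_mask) < diff_threshold
```

An identical projection and clothing silhouette gives a difference of 0 and is accepted; disjoint silhouettes give a difference of 1 and trigger the (N+1)th reconstruction.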
In another possible implementation of S303, it may be determined whether the clothing category to which the clothing image belongs is consistent with the clothing category to which the three-dimensional clothing model belongs; if so, it is determined that the three-dimensional clothing model obtained by the Nth reconstruction meets the reconstruction requirement, and otherwise that it does not. Alternatively, in S303, the clothing features of the clothing image and of the three-dimensional clothing model may be extracted and compared; if the similarity between the two sets of clothing features is greater than a similarity threshold, it is determined that the three-dimensional clothing model obtained by the Nth reconstruction meets the reconstruction requirement, and otherwise that it does not. Clothing category comparison or clothing feature comparison thereby improves the accuracy of determining whether the three-dimensional clothing model obtained by the Nth reconstruction meets the reconstruction requirement, and further improves the accuracy of three-dimensional clothing model reconstruction.
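A minimal sketch of the feature-comparison variant, assuming the clothing features are plain numeric vectors and cosine similarity as the (unspecified) similarity measure:

```python
import math

def feature_similarity(feat_image, feat_model):
    """Cosine similarity between the clothing features of the image and of the model."""
    dot = sum(a * b for a, b in zip(feat_image, feat_model))
    norms = (math.sqrt(sum(a * a for a in feat_image))
             * math.sqrt(sum(b * b for b in feat_model)))
    return dot / norms

def requirement_met_by_features(feat_image, feat_model, sim_threshold=0.9):
    """Accept the Nth reconstruction when feature similarity exceeds the threshold."""
    return feature_similarity(feat_image, feat_model) > sim_threshold
```

The threshold value 0.9 is illustrative; in practice it would be tuned on validation data.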
In one possible implementation, the target clothing model may also be obtained by constraining the number of reconstructions, in addition to the manner provided in S303. Specifically, whether N is larger than M is judged; if N is larger than M, it is determined that the three-dimensional clothing model obtained by the Nth reconstruction meets the reconstruction requirement, and otherwise that it does not. This prevents the number of reconstructions from growing too large, which would make reconstruction inefficient without improving the effect of the three-dimensional clothing model.
S304, determining the target clothing model as the three-dimensional clothing model obtained by the Nth reconstruction.
In this embodiment, after determining that the target clothing model is the three-dimensional clothing model obtained by the nth reconstruction, S305 may be executed to perform the next texture rendering operation.
And S305, performing texture rendering on the target clothes model according to the clothes image, and generating a three-dimensional virtual clothes corresponding to the clothes image.
The implementation principle and the technical effect of S305 may refer to the foregoing embodiments, and are not described again.
S306, based on the shape difference between the clothing image and the three-dimensional clothing model obtained by the Nth reconstruction, the clothing prior information after the (N-1)th adjustment is adjusted to obtain the clothing prior information after the Nth adjustment, and N is incremented by one for the next three-dimensional model fitting reconstruction.
In this embodiment, if the three-dimensional clothing model obtained by the Nth reconstruction does not meet the reconstruction requirement, the shape difference between the clothing image and that model may be used as a model reconstruction loss value to determine an adjustment value for the clothing prior information after the (N-1)th adjustment; the clothing prior information after the (N-1)th adjustment is then adjusted based on this adjustment value to obtain the clothing prior information after the Nth adjustment. The specific process of determining the adjustment value from the shape difference is not limited here. While the clothing prior information is adjusted, N is incremented by one for the next three-dimensional model fitting reconstruction; that is, after S306 is executed, the process jumps back to S302.
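The S302–S306 loop can be sketched with a scalar standing in for the clothing prior information; the update rule (a step proportional to the shape difference), the step size, and the iteration cap M are illustrative assumptions, not the claimed implementation:

```python
def fit_clothing_prior(prior, reconstruct, target_shape,
                       step=0.2, max_reconstructions=100, tol=1e-4):
    """Repeated three-dimensional model fitting reconstruction, sketched with scalars.

    `prior` stands in for the clothing prior information, `reconstruct` for the
    clothing modeling network, and `target_shape` for the shape of the clothing image.
    """
    for n in range(1, max_reconstructions + 1):   # cap M on the number of reconstructions
        shape = reconstruct(prior)                # S302: Nth fitting reconstruction
        diff = shape - target_shape               # S303: shape difference as loss value
        if abs(diff) < tol:                       # reconstruction requirement met
            return prior, n
        prior = prior - step * diff               # S306: adjust the prior by the difference
    return prior, max_reconstructions
```

With a toy `reconstruct` of `lambda p: 2 * p` and target shape 4.0, the loop converges to a prior of 2.0 well before the cap is reached.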
Therefore, through repeated fitting reconstruction of the three-dimensional model, the three-dimensional clothing model obtained through reconstruction is closer to the three-dimensional clothing model corresponding to the clothing image, the reconstruction accuracy of the three-dimensional clothing model is improved, and the accuracy of the three-dimensional virtual clothing is further improved.
In one possible implementation, the shape difference between the clothing image and the three-dimensional clothing model obtained by the nth reconstruction may include the shape difference between the clothing image and the projection image of the three-dimensional clothing model obtained by the nth reconstruction. Therefore, the accuracy of the shape difference is improved through the two-dimensional projection image and the two-dimensional clothing image, and the accuracy of the clothing prior information adjustment is further improved.
In the embodiment of the disclosure, fitting and reconstructing a three-dimensional model based on clothing prior information to generate a three-dimensional clothing model, determining whether the three-dimensional clothing model meets reconstruction requirements according to clothing images, if so, determining the three-dimensional clothing model obtained by reconstruction as a target clothing model, otherwise, adjusting the clothing prior information according to the shape difference between the clothing images and the three-dimensional clothing model, and fitting and reconstructing the next three-dimensional model according to the adjusted clothing prior information. Therefore, fitting and reconstruction of the three-dimensional model are carried out for many times, the accuracy of the clothing prior information is higher and higher, the accuracy of the three-dimensional clothing model is higher and higher, and the accuracy of three-dimensional shape modeling for the clothing image is effectively improved. And then, texture rendering is carried out on the target clothing model based on the clothing image, so that the texture accuracy of the target clothing model is improved. Therefore, the quality of the three-dimensional virtual clothes is effectively improved without depending on a large amount of image data, and the generalization, stability and robustness of the three-dimensional virtual clothes generation are improved.
In some embodiments, considering that the clothing image plays a role in shape supervision to some extent in the three-dimensional model fitting reconstruction, the shape of the clothing image can be adjusted before the three-dimensional model fitting reconstruction, so as to improve the shape accuracy of the clothing image.
Based on the shape adjustment of the clothing image before the three-dimensional model fitting reconstruction, fig. 4 is a third flow diagram of the three-dimensional virtual clothing generation method provided according to the embodiment of the present disclosure. As shown in fig. 4, the three-dimensional virtual clothing generation method includes:
S401, obtaining a clothing image.
The implementation principle and the technical effect of S401 may refer to the foregoing embodiments, and are not described again.
S402, determining a target shape reference image corresponding to the clothing image.
Wherein the target shape reference image may reflect a clothing shape of the clothing image.
The target shape reference image may include a shape pattern reflecting the shape of the clothing image; for example, if the clothing image is a short-sleeve image, the shape pattern in the target shape reference image has a short-sleeve shape. Alternatively, the image shape of the target shape reference image is the clothing shape of the clothing image; for example, if the clothing image is a shorts image, the image shape of the target shape reference image is a shorts shape.
In this embodiment, considering that the clothing image reflects the clothing from one viewing angle, the shape of the clothing may not be accurately reflected, and the three-dimensional virtual clothing relates to displaying the clothing from a plurality of viewing angles, in order to improve the shape accuracy of the clothing image, the clothing shape reference image reflecting the clothing shape of the clothing image may be determined as the target shape reference image in a plurality of clothing shape reference images.
The plurality of clothing shape reference images can be preset or acquired in advance, and different clothing shapes are reflected by different clothing shape reference images.
In one possible implementation, S402 includes: determining a clothing category to which the clothing image belongs; and according to the clothing category, determining a target shape reference image corresponding to the clothing image in the clothing shape reference image. Therefore, the target shape reference image reflecting the clothing shape of the clothing image is determined based on the clothing category to which the clothing image belongs, and the accuracy of the target shape reference image is improved.
The apparel categories are short sleeves, long sleeves, vests, shorts, trousers, etc., among others.
The corresponding relation between the clothing category and the clothing shape reference image can be preset, different clothing categories correspond to different clothing shape reference images, and the clothing shape reference image corresponding to the clothing category is used for reflecting the clothing shape of clothing under the clothing category.
Wherein, the clothing shape reference image may include a clothing shape pattern reflecting the clothing shape; alternatively, the image shape of the clothing shape reference image may reflect the clothing shape.
In the implementation mode, the clothing category to which the clothing image belongs can be identified, namely the clothing category to which the clothing in the clothing image belongs; and according to the corresponding relation between the clothing category and the clothing shape reference image, determining the clothing shape reference image corresponding to the clothing category to which the clothing image belongs in the clothing shape reference image, wherein the clothing shape reference image is a target shape reference image. Therefore, different clothing shape reference images are set for different clothing categories, the accuracy of the clothing shape reference images is improved, and the accuracy of the target shape reference images corresponding to the clothing images is further improved.
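The category-to-reference correspondence described above reduces to a lookup table. A minimal sketch, in which the category names and image paths are hypothetical placeholders:

```python
# Hypothetical correspondence between clothing categories and shape reference images.
SHAPE_REFERENCES = {
    "short_sleeve": "refs/short_sleeve_shape.png",
    "long_sleeve":  "refs/long_sleeve_shape.png",
    "vest":         "refs/vest_shape.png",
    "shorts":       "refs/shorts_shape.png",
    "trousers":     "refs/trousers_shape.png",
}

def target_shape_reference(clothing_category):
    """Look up the shape reference image for the category the clothing image belongs to."""
    if clothing_category not in SHAPE_REFERENCES:
        raise ValueError(f"no shape reference for category {clothing_category!r}")
    return SHAPE_REFERENCES[clothing_category]
```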
In yet another possible implementation, S402 includes: and matching the clothing image with the clothing shape reference image, and determining the clothing shape reference image with the highest matching degree with the clothing image as a target shape reference image corresponding to the clothing image according to the matching degree of the clothing image and the clothing shape reference image. Therefore, the accuracy of the target shape reference image is improved through image matching.
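Selecting the reference with the highest matching degree is an argmax over the candidate set. A sketch, with the matching function left as a parameter since the patent does not fix one (the toy `match_degree` in the usage below is purely illustrative):

```python
def best_matching_reference(clothing_image, references, match_degree):
    """Return the shape reference image with the highest matching degree to the clothing image."""
    return max(references, key=lambda ref: match_degree(clothing_image, ref))
```

For example, with images represented as feature tuples and a toy matching degree of negative absolute difference of their sums, `best_matching_reference((4, 4), [(1, 1), (3, 3), (9, 9)], lambda a, b: -abs(sum(a) - sum(b)))` picks `(3, 3)`.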
And S403, adjusting the shape of the clothing image according to the target shape reference image.
In this embodiment, after the target shape reference image is obtained, the clothing shape of the clothing image is adjusted. For example, when the clothing in the clothing image is short sleeves, the target shape reference image gives a complete short sleeve shape, and the clothing shape of the clothing image can be adjusted according to the short sleeve shape reflected by the target shape reference image. Therefore, the accuracy and the completeness of the clothing shape reflected by the clothing image are improved, and the accuracy of a subsequent target clothing model is improved.
In a possible implementation, a shape adjustment network may be adopted: the target shape reference image and the clothing image are input into the shape adjustment network, and the shape features of the clothing image are adjusted based on the shape features of the target shape reference image to obtain the shape-adjusted clothing image. The shape adjustment network may be a generative adversarial network: a generator in the generative adversarial network adjusts the shape of the clothing image to obtain a shape-adjusted clothing image, a discriminator in the generative adversarial network discriminates the shape-adjusted clothing image based on the target shape reference image, and the final shape-adjusted clothing image is obtained after multiple rounds of adjustment. The generative adversarial network thereby improves the accuracy of shape adjustment of the clothing image.
S404, performing three-dimensional model fitting reconstruction according to the clothing image and clothing prior information related to the clothing shape to generate a target clothing model.
In this embodiment, after the shape-adjusted clothing image is obtained, three-dimensional model fitting reconstruction is performed based on the shape-adjusted clothing image and clothing prior information related to the clothing shape, so as to generate a target clothing model. The implementation principle and the technical effect of generating the target clothing model may refer to the foregoing embodiments, and are not described herein again.
And S405, performing texture rendering on the target clothes model according to the clothes image, and generating a three-dimensional virtual clothes corresponding to the clothes image.
The implementation principle and the technical effect of S405 may refer to the foregoing embodiments, and are not described again.
In the embodiment of the disclosure, the shape of the clothing image is adjusted according to the target shape reference image with a more complete and accurate clothing shape, the accuracy and the integrity of the clothing shape of the clothing image are improved, three-dimensional model fitting reconstruction is performed on the basis of the clothing image after the shape adjustment and clothing prior information related to the clothing shape, a target clothing model is generated, and the accuracy of the target clothing model is improved. And then, performing texture rendering on the target clothing model based on the clothing image, so that the texture accuracy of the target clothing model is improved. Therefore, the quality of the three-dimensional virtual clothes is effectively improved without depending on a large amount of image data, and the generalization, stability and robustness of the three-dimensional virtual clothes generation are improved.
In some embodiments, besides the shape adjustment of the clothing image, before the three-dimensional model fitting, the posture adjustment of the clothing image can be performed, so that the clothing posture in the clothing image can be adjusted to a more ideal target posture, and the clothing shape and the clothing texture can be more clearly embodied.
Based on pose adjustment of the clothing image before three-dimensional model fitting, fig. 5 is a fourth flow diagram of the three-dimensional virtual clothing generation method provided according to the embodiment of the present disclosure. As shown in fig. 5, the three-dimensional virtual clothing generation method includes:
S501, obtaining an initial image, wherein a target object in the initial image wears a real dress.
The implementation principle and the technical effect of S501 may refer to the foregoing embodiments, and are not described again.
S502, detecting the attitude information of the target object in the initial image.
The pose information of the target object may include image coordinates of a plurality of key points on the target object.
Wherein the target object may be a person, and the initial image may be an image of the person.
In this embodiment, the target object may be detected and cropped in the initial image to obtain a target object image, and pose estimation may be performed on the target object image to obtain the pose information of the target object. When the target object is a person, human body frame detection and cropping can be carried out on the initial image to obtain a human body image, and human body pose estimation is performed on the human body image to obtain the pose information of the human body in the initial image, namely the image coordinates of a plurality of key points on the human body.
And S503, clothing identification and segmentation are carried out on the initial image to obtain a clothing image.
The implementation principle and technical effect of S503 may refer to the foregoing embodiments, and are not described again.
Either of S502 and S503 may be executed first, or S502 and S503 may be executed simultaneously.
S504, according to the posture information of the target object, posture adjustment is carried out on the clothing image, so that the clothing posture in the clothing image is corrected to the target posture.
In this embodiment, after the pose information of the target object is obtained, the pose of the target object may not conform to the target pose. For example, the target pose may be a human body standing upright while the target object in the initial image is standing aslant; or the target pose may be hands hanging vertically downward while the hands of the target object in the initial image are crossed over the chest. Therefore, the pose information of the target object can be adjusted based on the target pose to obtain the adjusted pose information of the target object, and the clothing image can then be pose-adjusted based on the adjusted pose information of the target object.
In this embodiment, in the process of adjusting the pose information of the target object based on the target pose, the image position of each corresponding key point in the pose information of the target object may be adjusted based on the target position of that key point in the target pose, so as to obtain the adjusted pose information of the target object. In the process of adjusting the pose of the clothing image based on the adjusted pose information of the target object, each key point on the clothing image has a correspondence with a key point on the target object, so the image position of each key point on the clothing image can be adjusted based on the image position of the corresponding key point in the adjusted pose information of the target object. For example, if key point A and key point B in the adjusted pose information of the target object lie on the same horizontal line, then key point C corresponding to key point A and key point D corresponding to key point B on the clothing image are adjusted to lie on the same horizontal line. The clothing pose is thus adjusted through key point adjustment, which improves the accuracy of pose adjustment of the clothing image.
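One simple reading of the key point correspondence above, sketched under the assumption that each clothing key point moves by the same displacement as its corresponding body key point (the key point names and coordinates are illustrative):

```python
def adjust_clothing_keypoints(clothing_kps, body_kps, adjusted_body_kps):
    """Move each clothing key point by the displacement of its corresponding body key point."""
    adjusted = {}
    for name, (x, y) in clothing_kps.items():
        bx0, by0 = body_kps[name]            # key point before pose adjustment
        bx1, by1 = adjusted_body_kps[name]   # key point after pose adjustment
        adjusted[name] = (x + bx1 - bx0, y + by1 - by0)
    return adjusted
```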
In one possible implementation, S504 includes: performing image alignment on the clothes image based on the posture information of the target object; and based on the posture information, carrying out texture migration on the clothes image after the image alignment to obtain the clothes image after the posture adjustment. Therefore, the accuracy of posture adjustment is improved through image alignment and texture migration, and the texture accuracy of the adjusted clothes image is ensured.
In this embodiment, in the process of aligning the clothing image based on the pose information of the target object, the key points on the clothing image may be adjusted with reference to the key points in the pose information of the target object, as described above; details are not repeated. After image alignment, the clothing pose in the clothing image changes, and the texture on the clothing image should change accordingly with the pose; therefore, texture migration is performed on the aligned clothing image based on the pose information, which improves the plausibility and accuracy of the texture on the pose-adjusted clothing image.
The texture migration may be implemented by migrating the texture at the original image position of the key point on the clothing image to the image position after the key point is adjusted, or implemented by using a neural network, and the specific process of the texture migration is not limited and described in detail herein.
In a possible implementation manner, after performing pose adjustment on the apparel image according to the pose information, the method further includes: performing texture completion on the shielded area of the clothing image after the posture adjustment; and performing super-resolution reconstruction on the clothing image after the texture completion. Therefore, the image quality of the clothes image is improved through texture completion and super-resolution reconstruction, and the quality of the three-dimensional virtual clothes is further improved.
In this embodiment, for some large occluded areas, texture migration alone cannot achieve a good effect. In order to improve the texture integrity of the clothing image and solve the problem that the occluded area of the pose-adjusted clothing image lacks texture, texture completion is performed on the occluded area. For example, a texture usable for the occluded area may be determined from the area surrounding it and used to complete it; or an area symmetric to the occluded area may be determined in the clothing image and its texture used for completion. Other texture completion methods may also be adopted and are not described in detail. After the texture-completed clothing image is obtained, super-resolution reconstruction is performed on it using a super-resolution reconstruction method, thereby improving the image resolution and the image quality of the clothing image.
It should be noted that, performing shape adjustment on the clothing image, performing posture adjustment on the clothing image, performing texture completion on the clothing image, and performing super-resolution reconstruction on the clothing image are all ways to improve the image quality of the clothing image, and therefore, only one way or a combination of at least two ways can be adopted to improve the image quality of the clothing image.
And S505, performing three-dimensional model fitting reconstruction according to the clothing image and clothing prior information related to the clothing shape to generate a target clothing model.
And S506, performing texture rendering on the target clothes model according to the clothes image, and generating a three-dimensional virtual clothes corresponding to the clothes image.
The implementation principle and the technical effect of S505 and S506 may refer to the foregoing embodiments, and are not described again. In S505, the clothing image may be the posture-adjusted clothing image, or may be the texture-supplemented clothing image, or may be the super-resolution reconstructed clothing image.
In the embodiment of the disclosure, the posture of the clothing image is adjusted according to the posture information of the target object in the initial image and the target posture, and the clothing posture of the clothing image is improved, so that the clothing image completely and accurately reflects the shape of the clothing; performing three-dimensional model fitting reconstruction based on the clothing image and clothing prior information related to the clothing shape to generate a target clothing model, and improving the accuracy of the target clothing model; texture rendering is carried out on the target clothing model based on the clothing image, and the texture accuracy of the target clothing model is improved. Therefore, the quality of the three-dimensional virtual clothes is effectively improved without depending on a large amount of image data, and the generalization, stability and robustness of the three-dimensional virtual clothes generation are improved.
In the following, embodiments of texture rendering of the target clothing model are provided.
In some embodiments, performing texture rendering on the target clothing model according to the clothing image to generate the three-dimensional virtual clothing comprises: generating a texture image according to the clothing image and the target clothing model; and rendering the texture image on the target clothing model to obtain the three-dimensional virtual clothing.
In the embodiment, because the clothing image has the texture information of the clothing, the texture image suitable for the target clothing model can be generated based on the clothing image; and after the texture image is generated, performing texture rendering on the target clothes model, enriching clothes textures on the target clothes model, obtaining the target clothes model after the texture rendering, and obtaining the three-dimensional virtual clothes. Therefore, after the target clothing model is obtained, considering that the three-dimensional model fitting modeling mode mainly focuses on modeling of clothing shapes, namely on clothing outlines, and the texture information is insufficient, in order to solve the problem, texture rendering is carried out on the target clothing model based on the clothing images containing abundant clothing textures, and the texture definition of the three-dimensional virtual clothing is improved.
In some embodiments, from the apparel image and the target apparel model, generating a texture image comprises: performing two-dimensional projection on the target clothing model to obtain a two-dimensional grid image; and performing affine transformation on the clothing image according to the corresponding relation between the pixel coordinates of the two-dimensional grid image and the texture coordinates of the target clothing model to obtain a texture image.
In this embodiment, since the clothing image is a two-dimensional image and the target clothing model is a three-dimensional model, in order to obtain a texture image suitable for the three-dimensional target clothing model based on the two-dimensional clothing image, a correspondence between two-dimensional coordinates and three-dimensional coordinates needs to be established; in other words, a correspondence between pixel coordinates and texture coordinates on the three-dimensional clothing model needs to be established. To establish this correspondence, the target clothing model can be subjected to two-dimensional projection to obtain a dense two-dimensional grid image. Because the target clothing model comprises a plurality of three-dimensional vertex coordinates and a plurality of polygonal patches (such as triangular patches and quadrilateral patches), two-dimensional projection of the target clothing model yields the two-dimensional grid images respectively corresponding to the polygonal patches. The two-dimensional projection process may comprise splitting quadrilateral patches into triangular patches and projecting the triangular patches to obtain the two-dimensional grid image. Through the two-dimensional projection, the correspondence between the two-dimensional vertices on the two-dimensional grid image and the three-dimensional vertices on the patches is obtained; that is, the correspondence between two-dimensional and three-dimensional coordinates, and hence between pixel coordinates and texture coordinates, is established.
Then, based on these correspondences, the clothing image is subjected to affine transformation, i.e., mapped from the image space into the texture space of the model, to obtain the texture image.
Therefore, the two-dimensional clothes image is converted into the texture image suitable for the three-dimensional target clothes model in a mode of determining the corresponding relation between the pixel coordinates and the texture coordinates, the accuracy of the texture image is improved, and the definition of the texture on the three-dimensional virtual clothes is further improved.
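The per-triangle affine mapping between the projected grid and the texture coordinates can be expressed with barycentric coordinates, a standard construction; the triangles and query point below are illustrative:

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates (u, v, w) of point p in triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    d = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / d
    v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / d
    return u, v, 1.0 - u - v

def warp_point(p, src_tri, dst_tri):
    """Affine-map p from the source triangle to the destination triangle
    by reusing its barycentric coordinates (pixel -> texture correspondence)."""
    u, v, w = barycentric(p, *src_tri)
    (ax, ay), (bx, by), (cx, cy) = dst_tri
    return (u * ax + v * bx + w * cx, u * ay + v * by + w * cy)
```

Warping every pixel of the clothing image through the triangle pair that contains it yields the texture image; for instance, the midpoint of one edge of the source triangle maps to the midpoint of the corresponding edge of the destination triangle.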
Fig. 6 is a schematic structural diagram of a three-dimensional virtual clothing generation apparatus provided in the embodiment of the present disclosure. As shown in fig. 6, the three-dimensional virtual clothing generation apparatus 600 includes:
an image acquisition unit 601 configured to acquire a clothing image;
a three-dimensional model constructing unit 602, configured to perform fitting reconstruction on a three-dimensional model according to the clothing image and clothing prior information related to the clothing shape, and generate a target clothing model;
and a texture rendering unit 603, configured to perform texture rendering on the target clothing model according to the clothing image, and generate a three-dimensional virtual clothing corresponding to the clothing image.
Fig. 7 is a schematic structural diagram of a three-dimensional virtual clothing generation apparatus provided in the embodiment of the present disclosure. As shown in fig. 7, the three-dimensional virtual clothing generation apparatus 700 includes:
an image acquisition unit 701 configured to acquire a clothing image;
the three-dimensional model building unit 702 is configured to perform three-dimensional model fitting reconstruction according to the clothing image and clothing prior information related to the clothing shape to generate a target clothing model;
the texture rendering unit 703 is configured to perform texture rendering on the target clothing model according to the clothing image, and generate a three-dimensional virtual clothing corresponding to the clothing image.
The clothing image is used for adjusting clothing prior information.
In some embodiments, as shown in fig. 7, the three-dimensional model building unit 702 includes: the model reconstruction module 7021 is configured to perform the N-th fitting reconstruction of the three-dimensional model according to the clothing prior information after the (N-1)-th adjustment, so as to obtain the three-dimensional clothing model obtained by the N-th reconstruction, where N is greater than or equal to 1; the model judging module 7022 is configured to determine, according to the clothing image, whether the three-dimensional clothing model obtained by the N-th reconstruction meets the reconstruction requirement; the model determining module 7023 is configured to determine the target clothing model as the three-dimensional clothing model obtained by the N-th reconstruction if the three-dimensional clothing model obtained by the N-th reconstruction meets the reconstruction requirement; the information adjusting module 7024 is configured to, if the three-dimensional clothing model obtained by the N-th reconstruction does not meet the reconstruction requirement, adjust the clothing prior information after the (N-1)-th adjustment based on a shape difference between the clothing image and the three-dimensional clothing model obtained by the N-th reconstruction to obtain the clothing prior information after the N-th adjustment, so as to perform the (N+1)-th fitting reconstruction of the three-dimensional model.
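The interaction of the modules 7021-7024 amounts to an iterative fitting loop, which can be sketched as follows (the callables `reconstruct`, `meets_requirement`, and `adjust`, and the `max_iter` cap, are hypothetical stand-ins for those modules):

```python
def fit_model(prior, reconstruct, meets_requirement, adjust, max_iter=50):
    """Iterative fitting sketch: reconstruct a model from the current prior,
    test the result against the clothing image, and otherwise adjust the
    prior from the shape difference before the next round."""
    model = reconstruct(prior)
    for _ in range(max_iter):
        if meets_requirement(model):
            return model                 # the target clothing model
        prior = adjust(prior, model)     # N-th adjustment of the prior
        model = reconstruct(prior)       # (N+1)-th fitting reconstruction
    return model                         # best effort after max_iter rounds
```

With toy callables such as `fit_model(0, lambda p: p, lambda m: m >= 3, lambda p, m: p + 1)`, the loop adjusts the prior until the reconstruction requirement is first met.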
In some embodiments, model determination module 7022 comprises: a projection submodule (not shown in the figure) for performing two-dimensional projection on the three-dimensional clothing model obtained by the Nth reconstruction to obtain a projection image; a difference determination sub-module (not shown in the figures) for determining shape differences between the projected image and the apparel image; and the judgment sub-module (not shown in the figure) is used for determining that the three-dimensional clothing model obtained by the Nth reconstruction meets the reconstruction requirement if the shape difference between the projection image and the clothing image is smaller than the difference threshold, and otherwise determining that the three-dimensional clothing model obtained by the Nth reconstruction does not meet the reconstruction requirement.
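The judgment performed by the sub-modules above can be sketched with 1 minus the intersection-over-union of the two silhouettes as the shape difference (the IoU measure and the threshold value are assumptions; the embodiment only requires some shape-difference measure compared against a difference threshold):

```python
import numpy as np

def shape_difference(projection_mask, apparel_mask):
    """Shape difference measured as 1 minus the intersection-over-union of
    the projected silhouette and the clothing-image silhouette."""
    proj = np.asarray(projection_mask, dtype=bool)
    img = np.asarray(apparel_mask, dtype=bool)
    union = np.logical_or(proj, img).sum()
    if union == 0:
        return 0.0                      # both masks empty: no difference
    inter = np.logical_and(proj, img).sum()
    return 1.0 - inter / union

def meets_reconstruction_requirement(projection_mask, apparel_mask, threshold=0.1):
    """Reconstruction requirement: shape difference below the difference threshold."""
    return shape_difference(projection_mask, apparel_mask) < threshold
```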
In some embodiments, as shown in fig. 7, the three-dimensional virtual apparel generation apparatus 700 further comprises: a shape reference determining unit 704, configured to determine a target shape reference image corresponding to the clothing image; an image shape adjusting unit 705, configured to perform shape adjustment on the clothing image according to the target shape reference image.
In some embodiments, the shape reference determination unit 704 includes: a category determining module (not shown in the figure) for determining a clothing category to which the clothing image belongs; and an image determining module (not shown in the figure) for determining a target shape reference image in the clothing shape reference image according to the clothing category.
In some embodiments, the image acquisition unit 701 includes: an initial image obtaining module (not shown in the figure) for obtaining an initial image, wherein a target object in the initial image wears a real dress; and the image identification and segmentation module (not shown in the figure) is used for identifying and segmenting clothes of the initial image to obtain a clothes image.
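The recognition-and-segmentation step can be sketched as masking the initial image with a binary clothing mask (the mask itself would come from a segmentation model, which is outside this sketch; `cut_out_apparel` is a hypothetical helper):

```python
import numpy as np

def cut_out_apparel(initial_image, mask):
    """Keep only the pixels flagged as clothing by a binary mask,
    zeroing everything else to obtain the clothing image."""
    img = np.asarray(initial_image)
    m = np.asarray(mask, dtype=bool)
    out = np.zeros_like(img)
    out[m] = img[m]                     # copy clothing pixels only
    return out
```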
In some embodiments, as shown in fig. 7, the three-dimensional virtual apparel generating apparatus 700 further comprises: a posture detection unit 706 for detecting posture information of the target object in the initial image; an image posture adjustment unit 707 for performing posture adjustment on the clothing image according to the posture information to correct the clothing posture in the clothing image to the target posture.
In some embodiments, the image pose adjustment unit 707 includes: an image alignment module (not shown in the figure) for performing image alignment on the clothes image based on the posture information; and the texture migration module (not shown in the figure) is used for performing texture migration on the clothes image after the image alignment based on the posture information to obtain the clothes image after the posture adjustment.
In some embodiments, as shown in fig. 7, the three-dimensional virtual apparel generating apparatus 700 further comprises: an image texture completion unit 708, configured to perform texture completion on the occluded area of the posture-adjusted clothing image; and a super-resolution reconstruction unit 709, configured to perform super-resolution reconstruction on the texture-completed clothing image.
In some embodiments, texture rendering unit 703 includes: a texture image generation module (not shown in the figure) for generating a texture image according to the clothing image and the target clothing model; and the texture image rendering module (not shown in the figure) is used for rendering the texture image on the target clothes model to obtain the three-dimensional virtual clothes.
In some embodiments, the texture image generation module comprises: a two-dimensional projection sub-module (not shown in the figure) for performing two-dimensional projection on the target clothes model to obtain a two-dimensional grid image; and the image affine transformation submodule (not shown in the figure) is used for carrying out affine transformation on the clothing image according to the corresponding relation between the pixel coordinates of the two-dimensional grid image and the texture coordinates of the target clothing model to obtain a texture image.
The three-dimensional virtual clothing generation apparatuses provided in fig. 6 and fig. 7 can implement the corresponding method embodiments; the implementation principles and technical effects are similar and are not described herein again.
According to an embodiment of the present disclosure, the present disclosure also provides an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executable by the at least one processor to enable the at least one processor to perform the aspects provided by any of the embodiments described above.
According to an embodiment of the present disclosure, there is also provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the solution provided by any of the above embodiments.
According to an embodiment of the present disclosure, the present disclosure also provides a computer program product comprising: a computer program, stored in a readable storage medium, from which at least one processor of the electronic device can read the computer program, the at least one processor executing the computer program causing the electronic device to perform the solution provided by any of the embodiments described above.
Fig. 8 is a schematic block diagram of an example electronic device 800 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not intended to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the electronic device 800 includes a computing unit 801, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 802 or loaded from a storage unit 808 into a random access memory (RAM) 803. The RAM 803 can also store various programs and data required for the operation of the electronic device 800. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other via a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
A number of components in the electronic device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard, a mouse, or the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the electronic device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, microcontroller, and the like. The computing unit 801 performs the respective methods and processes described above, such as the three-dimensional virtual apparel generation method. For example, in some embodiments, the three-dimensional virtual apparel generation method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 800 via the ROM 802 and/or the communication unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the three-dimensional virtual apparel generation method described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the three-dimensional virtual apparel generation method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the defects of high management difficulty and weak service extensibility in traditional physical hosts and virtual private server (VPS) services. The server may also be a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially or in different orders, and are not limited herein as long as the desired results of the technical aspects of the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.

Claims (20)

1. A three-dimensional virtual apparel generation method, comprising:
acquiring a clothing image;
according to the clothing image and clothing prior information related to the clothing shape, three-dimensional model fitting reconstruction is carried out, and a target clothing model is generated;
performing texture rendering on the target clothing model according to the clothing image to generate a three-dimensional virtual clothing corresponding to the clothing image;
wherein, the acquiring of the clothing image comprises:
acquiring an initial image, wherein a target object in the initial image wears a real dress;
clothing recognition and segmentation are carried out on the initial image to obtain a clothing image;
further comprising:
detecting pose information of the target object in the initial image;
and according to the posture information, carrying out posture adjustment on the clothing image so as to correct the clothing posture in the clothing image to a target posture.
2. The three-dimensional virtual clothing generation method according to claim 1, wherein the clothing image is used for adjusting the clothing prior information, and the three-dimensional model fitting reconstruction is performed according to the clothing image and the clothing prior information related to the clothing shape to generate a target clothing model, including:
performing the N-th fitting reconstruction of the three-dimensional model according to the clothing prior information after the (N-1)-th adjustment to obtain a three-dimensional clothing model obtained by the N-th reconstruction, wherein N is greater than or equal to 1;
determining whether the three-dimensional clothing model obtained by the Nth reconstruction meets the reconstruction requirement or not according to the clothing image;
if the three-dimensional clothing model obtained by the Nth reconstruction meets the reconstruction requirement, determining the target clothing model as the three-dimensional clothing model obtained by the Nth reconstruction;
if the three-dimensional clothing model obtained by the N-th reconstruction does not meet the reconstruction requirement, adjusting the clothing prior information after the (N-1)-th adjustment based on the shape difference between the clothing image and the three-dimensional clothing model obtained by the N-th reconstruction to obtain the clothing prior information after the N-th adjustment, and incrementing N by one to perform the next three-dimensional model fitting reconstruction.
3. The three-dimensional virtual clothing generation method according to claim 2, wherein the determining whether the three-dimensional clothing model obtained by the Nth reconstruction meets reconstruction requirements according to the clothing image comprises:
performing two-dimensional projection on the three-dimensional clothing model obtained by the Nth reconstruction to obtain a projection image;
determining a shape difference between the projected image and the apparel image;
and if the shape difference between the projection image and the clothing image is smaller than a difference threshold value, determining that the three-dimensional clothing model obtained by the Nth reconstruction meets the reconstruction requirement, otherwise determining that the three-dimensional clothing model obtained by the Nth reconstruction does not meet the reconstruction requirement.
4. The three-dimensional virtual apparel generation method of any of claims 1-3, further comprising:
determining a target shape reference image corresponding to the clothing image;
and according to the target shape reference image, carrying out shape adjustment on the clothing image.
5. The three-dimensional virtual apparel generation method of claim 4, wherein the determining a target shape reference image to which the apparel image corresponds comprises:
determining a clothing category to which the clothing image belongs;
determining the target shape reference image in an apparel shape reference image according to the apparel category.
6. The three-dimensional virtual apparel generation method of claim 1, wherein the pose adjustment of the apparel image according to the pose information comprises:
performing image alignment on the clothing image based on the posture information;
and based on the posture information, carrying out texture migration on the clothes image after the image alignment to obtain the clothes image after the posture adjustment.
7. The three-dimensional virtual apparel generation method of claim 6, further comprising:
performing texture completion on the occluded area of the clothing image after the posture adjustment;
and performing super-resolution reconstruction on the clothing image after the texture completion.
8. The three-dimensional virtual garment generation method of any of claims 1-3, wherein the texture rendering of the target garment model from the garment image, generating a three-dimensional virtual garment corresponding to the garment image, comprises:
generating a texture image according to the clothing image and the target clothing model;
rendering the texture image on the target clothes model to obtain the three-dimensional virtual clothes.
9. The three-dimensional virtual apparel generation method of claim 8, wherein the generating a texture image from the apparel image and the target apparel model comprises:
performing two-dimensional projection on the target clothing model to obtain a two-dimensional grid image;
and carrying out affine transformation on the clothing image according to the corresponding relation between the pixel coordinates of the two-dimensional grid image and the texture coordinates of the target clothing model to obtain the texture image.
10. A three-dimensional virtual apparel generation apparatus, comprising:
the image acquisition unit is used for acquiring a clothing image;
the three-dimensional model building unit is used for carrying out three-dimensional model fitting reconstruction according to the clothing image and clothing prior information related to the clothing shape to generate a target clothing model;
the texture rendering unit is used for performing texture rendering on the target clothing model according to the clothing image and generating a three-dimensional virtual clothing corresponding to the clothing image;
wherein the image acquisition unit includes:
the system comprises an initial image acquisition module, a display module and a display module, wherein the initial image acquisition module is used for acquiring an initial image, and a target object in the initial image wears a real dress;
the image identification and segmentation module is used for identifying and segmenting clothing on the initial image to obtain the clothing image;
further comprising:
an attitude detection unit configured to detect attitude information of the target object in the initial image;
and the image posture adjusting unit is used for carrying out posture adjustment on the clothing image according to the posture information so as to correct the clothing posture in the clothing image to a target posture.
11. The three-dimensional virtual apparel generation apparatus of claim 10, wherein the apparel image is used to adjust the apparel prior information, the three-dimensional model construction unit comprising:
the model reconstruction module is used for performing the N-th fitting reconstruction of the three-dimensional model according to the clothing prior information after the (N-1)-th adjustment, to obtain the three-dimensional clothing model obtained by the N-th reconstruction, wherein N is greater than or equal to 1;
the model judgment module is used for determining whether the three-dimensional clothing model obtained by the Nth reconstruction meets the reconstruction requirement or not according to the clothing image;
the model determining module is used for determining the target clothing model to be the three-dimensional clothing model obtained by the Nth reconstruction if the three-dimensional clothing model obtained by the Nth reconstruction meets the reconstruction requirement;
and the information adjusting module is used for adjusting the clothing prior information after the N-1 th adjustment based on the shape difference between the clothing image and the three-dimensional clothing model obtained by the Nth reconstruction if the three-dimensional clothing model obtained by the Nth reconstruction does not meet the reconstruction requirement to obtain the clothing prior information after the Nth adjustment so as to perform fitting reconstruction of the three-dimensional model for the (N + 1) th time.
12. The three-dimensional virtual apparel generating apparatus of claim 11, wherein the model determining module comprises:
the projection submodule is used for carrying out two-dimensional projection on the three-dimensional clothes model obtained by the Nth reconstruction to obtain a projection image;
a difference determination sub-module for determining a shape difference between the projected image and the apparel image;
and the judging submodule is used for determining that the three-dimensional clothes model obtained by the Nth reconstruction meets the reconstruction requirement if the shape difference between the projection image and the clothes image is smaller than a difference threshold value, and otherwise determining that the three-dimensional clothes model obtained by the Nth reconstruction does not meet the reconstruction requirement.
13. The three-dimensional virtual apparel generation apparatus of any of claims 10-12, further comprising:
the shape reference determining unit is used for determining a target shape reference image corresponding to the clothing image;
and the image shape adjusting unit is used for adjusting the shape of the clothing image according to the target shape reference image.
14. The three-dimensional virtual apparel generation apparatus of claim 13, wherein the shape reference determination unit comprises:
the category determining module is used for determining the clothing category to which the clothing image belongs;
and the image determining module is used for determining the target shape reference image in the clothing shape reference image according to the clothing category.
15. The three-dimensional virtual apparel generating apparatus of claim 10, wherein the image pose adjustment unit comprises:
the image alignment module is used for carrying out image alignment on the clothes image based on the posture information;
and the texture migration module is used for carrying out texture migration on the clothes image after the image alignment based on the posture information to obtain the clothes image after the posture adjustment.
16. The three-dimensional virtual apparel generation apparatus of claim 15, further comprising:
the image texture completion unit is used for performing texture completion on the occluded area of the clothing image after the posture adjustment;
and the super-resolution reconstruction unit is used for performing super-resolution reconstruction on the clothing image after the texture completion.
17. The three-dimensional virtual garment generation apparatus according to any one of claims 10-12, wherein the texture rendering unit includes:
the texture image generation module is used for generating a texture image according to the clothing image and the target clothing model;
and the texture image rendering module is used for rendering the texture image on the target clothes model to obtain the three-dimensional virtual clothes.
18. The three-dimensional virtual apparel generating apparatus of claim 17, wherein the texture image generating module comprises:
the two-dimensional projection sub-module is used for performing two-dimensional projection on the target clothing model to obtain a two-dimensional grid image;
and the image affine transformation submodule is used for carrying out affine transformation on the clothing image according to the corresponding relation between the pixel coordinates of the two-dimensional grid image and the texture coordinates of the target clothing model to obtain the texture image.
19. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the three-dimensional virtual apparel generation method of any of claims 1-9.
20. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the three-dimensional virtual apparel generation method of any of claims 1-9.
CN202211290183.6A 2022-10-21 2022-10-21 Three-dimensional virtual clothing generation method, device, equipment and storage medium Active CN115375823B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211290183.6A CN115375823B (en) 2022-10-21 2022-10-21 Three-dimensional virtual clothing generation method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN115375823A CN115375823A (en) 2022-11-22
CN115375823B true CN115375823B (en) 2023-01-31



Also Published As

Publication number Publication date
CN115375823A (en) 2022-11-22

Similar Documents

Publication Publication Date Title
CN115375823B (en) Three-dimensional virtual clothing generation method, device, equipment and storage medium
CN110889890B (en) Image processing method and device, processor, electronic equipment and storage medium
CN113724368B (en) Image acquisition system, three-dimensional reconstruction method, device, equipment and storage medium
CN115409933B (en) Multi-style texture mapping generation method and device
CN111583381B (en) Game resource map rendering method and device and electronic equipment
CN113129450A (en) Virtual fitting method, device, electronic equipment and medium
CN114723888B (en) Three-dimensional hair model generation method, device, equipment, storage medium and product
CN115018992B (en) Method and device for generating hair style model, electronic equipment and storage medium
CN115330940B (en) Three-dimensional reconstruction method, device, equipment and medium
CN114792355B (en) Virtual image generation method and device, electronic equipment and storage medium
CN114067051A (en) Three-dimensional reconstruction processing method, device, electronic device and storage medium
CN113962845A (en) Image processing method, image processing apparatus, electronic device, and storage medium
CN115375847B (en) Material recovery method, three-dimensional model generation method and model training method
CN115761123B (en) Three-dimensional model processing method, three-dimensional model processing device, electronic equipment and storage medium
CN115222895B (en) Image generation method, device, equipment and storage medium
CN111652807A (en) Eye adjustment method, eye live broadcast method, eye adjustment device, eye live broadcast device, electronic equipment and storage medium
CN115409951A (en) Image processing method, image processing device, electronic equipment and storage medium
CN115375740A (en) Pose determination method, three-dimensional model generation method, device, equipment and medium
CN113781653A (en) Object model generation method and device, electronic equipment and storage medium
CN114820908B (en) Virtual image generation method and device, electronic equipment and storage medium
CN115953553B (en) Avatar generation method, apparatus, electronic device, and storage medium
CN114708399B (en) Three-dimensional reconstruction method, device, equipment, medium and product
CN116012666B (en) Image generation, model training and information reconstruction methods and devices and electronic equipment
CN115147578B (en) Stylized three-dimensional face generation method and device, electronic equipment and storage medium
CN115439331B (en) Corner correction method and generation method and device of three-dimensional model in meta universe

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant