CN115147508A - Method and device for training clothing generation model and method and device for generating clothing image - Google Patents

Method and device for training clothing generation model and method and device for generating clothing image

Info

Publication number
CN115147508A
Authority
CN
China
Prior art keywords
image
texture
shape
encoder
clothing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210769171.5A
Other languages
Chinese (zh)
Other versions
CN115147508B (en)
Inventor
杨少雄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202210769171.5A priority Critical patent/CN115147508B/en
Publication of CN115147508A publication Critical patent/CN115147508A/en
Application granted granted Critical
Publication of CN115147508B publication Critical patent/CN115147508B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/001 Texturing; Colouring; Generation of texture or colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/54 Extraction of image or video features relating to texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a method and apparatus for training a clothing generation model and a method and apparatus for generating clothing images, relating to the technical field of artificial intelligence, specifically to augmented reality (AR), virtual reality, computer vision, deep learning, and the like, and applicable to scenarios such as the metaverse. The specific implementation scheme is as follows: input the shape mask image and the texture image into a shape encoder and a texture encoder, respectively, to obtain a predetermined number of layers of shape features and texture features; add the shape features of each layer to the texture features of the corresponding layer to obtain a predetermined number of layers of fusion features, input the fusion features into a pre-training model, and output a virtual apparel image; adjust relevant parameters of the shape encoder and the texture encoder based on the difference between the original image and the virtual apparel image; and obtain a clothing generation model based on the adjusted shape encoder, the adjusted texture encoder, and the pre-training model. This embodiment yields a model that can generate clothing images of a specified style.

Description

Method and device for training clothing generation model and method and device for generating clothing image
Technical Field
The present disclosure relates to the technical field of artificial intelligence, in particular to augmented reality (AR), virtual reality, computer vision, deep learning, and the like; it can be applied to scenarios such as the metaverse, and specifically relates to a method and device for training a clothing generation model and a method and device for generating clothing images.
Background
In recent years, with the rapid development of computer technology, image processing has been applied in many areas, one example being the personalized customization of cartoon-avatar clothing. The clothing of a 2D cartoon avatar needs to be generated from a photograph of a real person; the generated clothing must conform to the shape of a given template while remaining highly similar to the clothing in the original photograph.
In the related art, neither the shape nor the texture of the generated clothing image is controllable, so high-similarity reconstruction of a clothing image with the shape and texture of a specific style cannot be achieved.
Disclosure of Invention
The present disclosure provides a method, an apparatus, a device, a storage medium, and a computer program product for training a clothing generation model and for generating clothing images.
According to a first aspect of the present disclosure, there is provided a method for training a clothing generation model, comprising: acquiring a sample image of a garment, wherein the sample image comprises an original image, a shape mask image, and a texture image; inputting the shape mask image into a shape encoder to obtain a predetermined number of layers of shape features; inputting the texture image into a texture encoder to obtain a predetermined number of layers of texture features; adding the shape features of each layer to the texture features of the corresponding layer to obtain a predetermined number of layers of fusion features, inputting the fusion features into a pre-training model, and outputting a virtual apparel image; adjusting relevant parameters of the shape encoder and the texture encoder based on the difference between the original image and the virtual apparel image; and obtaining a clothing generation model based on the adjusted shape encoder, the adjusted texture encoder, and the pre-training model.
According to a second aspect of the present disclosure, there is provided a method of generating a clothing image, comprising: acquiring a shape image and a texture image of a garment of a specified style; and inputting the shape image and the texture image into a clothing generation model generated by the method of the first aspect to generate a clothing image of the specified style.
According to a third aspect of the present disclosure, there is provided a training apparatus for a clothing generation model, comprising: an acquisition unit configured to acquire a sample image of a garment, wherein the sample image includes an original image, a shape mask image, and a texture image; a shape encoding unit configured to input the shape mask image into a shape encoder to obtain a predetermined number of layers of shape features; a texture encoding unit configured to input the texture image into a texture encoder to obtain a predetermined number of layers of texture features; a fusion unit configured to add the shape features of each layer to the texture features of the corresponding layer to obtain a predetermined number of layers of fusion features, input the fusion features into a pre-training model, and output a virtual apparel image; an adjusting unit configured to adjust relevant parameters of the shape encoder and the texture encoder based on the difference between the original image and the virtual apparel image; and an output unit configured to obtain a clothing generation model based on the adjusted shape encoder, the adjusted texture encoder, and the pre-training model.
According to a fourth aspect of the present disclosure, there is provided an apparatus for generating a clothing image, comprising: an acquisition unit configured to acquire a shape image and a texture image of a garment of a specified style; and a generating unit configured to input the shape image and the texture image into a clothing generation model generated by the apparatus of the third aspect to generate a clothing image of the specified style.
According to a fifth aspect of the present disclosure, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the first and second aspects.
According to a sixth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions, wherein the computer instructions are for causing the computer to perform the method of any one of the first and second aspects.
According to a seventh aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method of any one of the first and second aspects.
The present application provides a 2D clothing image generation technique based on the independent encoding of shape and texture. Given a style shape and a texture image, it can reconstruct a high-quality, clean clothing image that matches the given style shape and is highly similar to the input texture, enabling the editable, controllable generation of 2D clothing images in both shape and texture. The technique can be used for the high-quality reconstruction of garment parts (short sleeves, long sleeves, trousers, shorts, skirts, etc.) of 2D avatars, as well as for batch 2D garment design and creation. It can also be used in 2D virtual try-on solutions to dress a model in any pose, and thus has broad application scenarios and commercial value.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is an exemplary system architecture diagram to which one embodiment of the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a training method for a clothing generation model according to the present application;
FIG. 3 is a schematic diagram of an application scenario of a training method of a clothing generation model according to the application;
FIG. 4 is a flow diagram of one embodiment of a method of generating an image of apparel in accordance with the present application;
FIG. 5 is a schematic diagram of an embodiment of a training apparatus for a clothing generation model according to the present application;
FIG. 6 is a schematic diagram of the structure of one embodiment of an apparatus for generating an image of apparel in accordance with the present application;
FIG. 7 is a block diagram of an electronic device for the training method of the apparel generation model and the method of generating apparel images according to an embodiment of the application.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of embodiments of the present disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 illustrates an exemplary system architecture 100 to which a training method of a garment generation model, a training apparatus of a garment generation model, a method of generating a garment image, or an apparatus of generating a garment image of embodiments of the application may be applied.
As shown in fig. 1, the system architecture 100 may include terminals 101, 102, a network 103, a database server 104, and a server 105. The network 103 serves as a medium for providing communication links between the terminals 101, 102, the database server 104 and the server 105. Network 103 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user 110 may use the terminals 101, 102 to interact with the server 105 over the network 103 to receive or send messages or the like. The terminals 101 and 102 may have various client applications installed thereon, such as a model training application, a clothing image editing application, a virtual fitting application, a shopping application, a payment application, a web browser, an instant messenger, and the like.
Here, the terminals 101 and 102 may be hardware or software. When the terminals 101 and 102 are hardware, they may be various electronic devices with display screens, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, laptop computers, desktop computers, and the like. When the terminals 101 and 102 are software, they may be installed in the electronic devices listed above and implemented either as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. No specific limitation is imposed here.
When the terminals 101, 102 are hardware, an image capturing device may be mounted thereon. The image acquisition device can be various devices capable of realizing the function of acquiring images, such as a camera, a sensor and the like. The user 110 may capture some apparel images using an image capture device on the terminal 101, 102.
Database server 104 may be a database server providing various services. For example, a sample set containing a large number of samples may be stored in the database server, where each sample may include an original image, a shape mask image, and a texture image. In this way, the user 110 may also select samples from the sample set stored by the database server 104 via the terminals 101, 102.
The server 105 may also be a server providing various services, such as a background server providing support for various applications displayed on the terminals 101, 102. The background server may train the initial model using the samples in the sample set sent by the terminals 101 and 102, and may send the training result (e.g., the generated clothing generation model) to the terminals 101 and 102. In this way, the user can apply the generated clothing generation model to design clothing, and generate a clothing image with a specified shape and texture.
Here, the database server 104 and the server 105 may be hardware or software. When they are hardware, they can be implemented as a distributed server cluster composed of multiple servers, or as a single server. When they are software, they may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module. No specific limitation is imposed here.
It should be noted that the training method of the clothing generation model and the method of generating clothing images provided in the embodiments of the present application are generally executed by the server 105. Accordingly, the training apparatus for the clothing generation model and the apparatus for generating clothing images are generally disposed in the server 105.
It is noted that database server 104 may not be provided in system architecture 100, as server 105 may perform the relevant functions of database server 104.
It should be understood that the number of terminals, networks, database servers, and servers in fig. 1 are merely illustrative. There may be any number of terminals, networks, database servers, and servers, as desired for an implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of a training method for a clothing generation model according to the present application is shown. The training method of the clothing generation model comprises the following steps:
Step 201, acquiring a sample image of the garment.
In this embodiment, the execution body of the training method of the clothing generation model (e.g., the server shown in fig. 1) may acquire the sample image set in various ways. For example, it may obtain an existing sample image set from a database server (e.g., database server 104 shown in fig. 1) via a wired or wireless connection. As another example, a user may collect samples via a terminal (e.g., terminals 101, 102 shown in fig. 1); the execution body may then receive the sample images collected by the terminal and store them locally, thereby generating a sample image set.
Here, the sample image set may include at least one sample image, where each sample image includes an original image, a shape mask image, and a texture image. Sample images are selected from the sample image set, and steps 202-206 are performed on them; the selection manner and the number of sample images are not limited in the present application. For example, at least one sample image may be selected at random, or sample images with better sharpness (i.e., higher resolution) may be selected.
The original image is a color image containing the garment. The shape mask image is a black-and-white image of the garment outline, also referred to as the shape mask; the garment shape mask can be extracted from the original image by algorithms such as image semantic segmentation. The texture image is a color image containing the texture and color of the garment, such as a blue-flowers-on-white texture image. The texture image may be a garment patch cropped from the original image at an arbitrary position and of arbitrary shape. It may also be an image obtained by randomly occluding the original image, simulating garment texture occluded by the user's arm, i.e., a defective texture image.
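As an illustration of the sample construction described above, the following is a minimal Python (PyTorch) sketch that builds one training sample from an original garment image. The segmentation model seg_model, the garment class index GARMENT_CLASS, and the quarter-size rectangular occluder are all assumptions made for illustration; the text fixes none of them.

    import torch

    GARMENT_CLASS = 1  # assumed label index of the garment class in seg_model

    def build_sample(original: torch.Tensor, seg_model) -> dict:
        """original: (3, H, W) color garment image with values in [0, 1]."""
        with torch.no_grad():
            logits = seg_model(original.unsqueeze(0))   # (1, C, H, W), assumed
            # Black-and-white garment outline: the shape mask image.
            shape_mask = (logits.argmax(1) == GARMENT_CLASS).float()

        # Simulate an arm occluding the garment: zero out a random rectangle,
        # producing a defective texture image.
        _, h, w = original.shape
        texture = original.clone()
        occ_h, occ_w = h // 4, w // 4                    # occluder size (assumed)
        top = torch.randint(0, h - occ_h, (1,)).item()
        left = torch.randint(0, w - occ_w, (1,)).item()
        texture[:, top:top + occ_h, left:left + occ_w] = 0.0

        return {"original": original, "shape_mask": shape_mask, "texture": texture}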
Step 202, inputting the shape mask image into a shape encoder to obtain shape features with a predetermined number of layers.
In this embodiment, the shape encoder is a convolutional neural network for extracting shape features of the image. Each layer of the shape encoder outputs shape features.
Step 203, inputting the texture image into a texture encoder to obtain texture features of a predetermined number of layers.
In this embodiment, the texture encoder is a convolutional neural network, which is used to extract texture features of the image. Each layer of the texture encoder outputs texture features. The number of layers of the texture encoder and the shape encoder is the same.
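A minimal sketch of such a pair of encoders follows. The 64-channel width, the 512-dimensional per-layer feature, and the default of 18 layers (matching the StyleGAN example in the application scenario below) are illustrative assumptions; the disclosure only requires that each layer output a feature and that both encoders have the same number of layers.

    import torch.nn as nn

    class PyramidEncoder(nn.Module):
        # Convolutional encoder that emits one feature vector per layer.

        def __init__(self, in_ch: int, num_layers: int = 18, dim: int = 512):
            super().__init__()
            self.stages = nn.ModuleList()
            self.heads = nn.ModuleList()
            ch = in_ch
            for _ in range(num_layers):
                self.stages.append(nn.Sequential(
                    nn.Conv2d(ch, 64, 3, padding=1), nn.ReLU()))
                # A head per layer turns that layer's feature map into a vector.
                self.heads.append(nn.Sequential(
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim)))
                ch = 64

        def forward(self, x):
            feats = []
            for stage, head in zip(self.stages, self.heads):
                x = stage(x)
                feats.append(head(x))  # the feature this layer outputs
            return feats               # num_layers tensors, each (B, dim)

    shape_encoder = PyramidEncoder(in_ch=1)    # the shape mask is single-channel
    texture_encoder = PyramidEncoder(in_ch=3)  # the texture image is RGB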
Step 204, adding the shape features of each layer to the texture features of the corresponding layer to obtain a predetermined number of layers of fusion features, inputting the fusion features into a pre-training model, and outputting a virtual apparel image.
In this embodiment, the texture encoder and the shape encoder have the same number of layers. The texture feature output by the first layer of the texture encoder is added to the shape feature output by the first layer of the shape encoder to obtain the first-layer fusion feature; similarly, the texture feature output by the Nth layer of the texture encoder is added to the shape feature output by the Nth layer of the shape encoder to obtain the Nth-layer fusion feature. The resulting N layers of fusion features are input into a pre-training model, which generates the virtual apparel image. The pre-training model may be the generator of a GAN, trained as follows: acquire real apparel images (apparel GT) as ground-truth labels; randomly generate vectors and input them into the generator to output predicted images; let the discriminator judge whether the predicted images and the real apparel images are real or fake; and alternately adjust the parameters of the generator and the discriminator. The finally obtained generator serves as the pre-training model.
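Continuing the sketch above, the fusion and generation step can be written as follows. The pretrained generator is assumed to accept one style vector per layer (StyleGAN's W+ input, as in the application scenario below); its exact call signature is an assumption.

    def generate_virtual_garment(shape_mask, texture, generator):
        shape_feats = shape_encoder(shape_mask)    # N per-layer shape features
        texture_feats = texture_encoder(texture)   # N per-layer texture features
        # Layer-wise addition: nth shape feature + nth texture feature.
        fused = [s + t for s, t in zip(shape_feats, texture_feats)]
        w_plus = torch.stack(fused, dim=1)         # (B, N, 512) style codes
        return generator(w_plus)                   # virtual apparel image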
Step 205, adjusting relevant parameters of the shape encoder and the texture encoder based on the difference between the original image and the virtual dress image.
In this embodiment, the difference between the sample's original image and the virtual apparel image may be calculated in various ways; the similarity can be computed with common image-similarity measures. If the similarity is smaller than a preset similarity threshold, the relevant parameters of the shape encoder and the texture encoder are adjusted to reduce the difference between the original image and the virtual apparel image, until the similarity converges to the preset similarity threshold.
The weights of the pre-training model (the GAN generator) are frozen during this training stage, i.e., the generator does not participate in this stage of training and is used only to train the shape encoder and the texture encoder.
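A sketch of this training stage, continuing from the code above: the pretrained generator is frozen and only the two encoders are optimized. The learning rate and the plain MSE placeholder loss (refined into the full losses in the optional implementations below) are assumptions.

    import torch.nn.functional as F

    generator.requires_grad_(False)  # frozen: not updated in this stage
    generator.eval()

    optimizer = torch.optim.Adam(
        list(shape_encoder.parameters()) + list(texture_encoder.parameters()),
        lr=1e-4)

    for batch in loader:             # `loader` yields the sample dicts built above
        fake = generate_virtual_garment(
            batch["shape_mask"], batch["texture"], generator)
        loss = F.mse_loss(fake, batch["original"])
        optimizer.zero_grad()
        loss.backward()              # gradients flow through the frozen
        optimizer.step()             # generator into both encoders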
Step 206, obtaining a clothing generation model based on the adjusted shape encoder, the adjusted texture encoder, and the pre-training model.
In this embodiment, if the difference between the sample's original image and the virtual apparel image is smaller than a predetermined value, or the similarity is greater than a predetermined similarity threshold, the training of the shape encoder and the texture encoder is complete. Together with the pre-training model they form the clothing image generation model, which may be published to a server or terminal device.
This embodiment provides a component structure in which garment shape and garment texture are encoded independently, and designs a set of loss functions to supervise the shape and texture of the generated garments. It yields a clothing generation model with a specified shape and texture, so customized clothing images can be generated according to users' needs; the model can be applied to scenarios such as virtual try-on.
In some optional implementations of this embodiment, adjusting the relevant parameters of the shape encoder and the texture encoder based on the difference between the sample's original image and the virtual apparel image comprises: calculating a 2D distance loss between the sample's original image and the virtual apparel image; calculating a perceptual loss between the sample's original image and the virtual apparel image; taking a weighted sum of the 2D distance loss and the perceptual loss as a first loss value; and, if the first loss value is greater than or equal to a first preset threshold, adjusting the relevant parameters of the shape encoder and the texture encoder. The 2D distance loss is the L2 loss. Learned Perceptual Image Patch Similarity (LPIPS), also known as perceptual loss, measures the difference between two images. Introducing the perceptual loss improves the accuracy with which image differences are measured, yielding a more accurate clothing image generation model.
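A sketch of this first loss value, using the lpips package for the perceptual term; the weights w_pix and w_lpips are assumptions, since the disclosure specifies a weighted sum but not the weights themselves.

    import lpips
    import torch.nn.functional as F

    lpips_fn = lpips.LPIPS(net="vgg")  # LPIPS expects inputs scaled to [-1, 1]

    def first_loss(fake, real, w_pix=1.0, w_lpips=0.8):
        loss_2d = F.mse_loss(fake, real)           # 2D distance (L2) loss
        loss_percep = lpips_fn(fake, real).mean()  # LPIPS perceptual loss
        return w_pix * loss_2d + w_lpips * loss_percep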
In some optional implementations of this embodiment, the method further includes: performing image semantic segmentation on the virtual apparel image to obtain a segmentation mask map; calculating an L1 distance loss between the segmentation mask map and the sample's shape mask image; taking a weighted sum of the L1 distance loss, the 2D distance loss, and the perceptual loss as a second loss value; and, if the second loss value is greater than or equal to a second preset threshold, adjusting the relevant parameters of the shape encoder and the texture encoder. Image semantic segmentation is an important part of image processing and image understanding in machine vision, and an important branch of AI: it classifies each pixel in the image, determining the category of each point (e.g., background, person, or vehicle) and thereby dividing the image into regions. The virtual apparel image can be segmented with a common off-the-shelf semantic segmentation model; the resulting segmentation mask map shows the garment outline but not its color or texture. The L1 distance loss here is a shape loss, which improves the accuracy of the shape and texture of the garments generated by the trained model.
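The second loss value extends the first with the L1 shape loss; a sketch follows. Using the soft garment probability instead of a hard argmax keeps the term differentiable (an implementation choice; the text only says the generated image is segmented). seg_model, GARMENT_CLASS, and the weight w_shape are assumptions carried over from the sketches above.

    def second_loss(fake, real, shape_mask, seg_model, w_shape=1.0):
        # Soft segmentation mask of the generated garment (differentiable).
        probs = seg_model(fake).softmax(dim=1)
        fake_mask = probs[:, GARMENT_CLASS:GARMENT_CLASS + 1]
        shape_loss = F.l1_loss(fake_mask, shape_mask)  # L1 distance loss
        return first_loss(fake, real) + w_shape * shape_loss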
In some optional implementations of this embodiment, obtaining a sample image of the garment includes: acquiring an original image of the garment; performing image semantic segmentation on the original image to obtain a shape mask image; randomly occluding the original image to obtain a texture image; and combining the original image with the corresponding shape mask image and texture image into a sample. The texture image obtained this way is a defective texture image, so the resulting clothing image generation model learns to repair defective textures into a complete clothing image. For example, if the texture image in a training sample shows a garment with some of its buttons covered by an arm (say, 2 of 5 buttons occluded), the model can still generate a clothing image showing all the buttons.
In some optional implementations of this embodiment, the pre-training model is the generator of a generative adversarial network, and the shape encoder and the texture encoder are convolutional neural networks with the same number of layers. The generator of StyleGAN can be employed as the pre-training model. This network structure is easy to construct and train and performs well.
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the training method of the clothing generation model according to this embodiment. In the application scenario of fig. 3, the clothing image generation model includes a shape encoder, a texture encoder, and a generator. The generator is trained on a clothing data set based on the StyleGAN network to obtain the pre-training model. The specific training process is as follows:
1. First, collect a large amount of 2D clothing image data and perform scale-alignment processing;
2. Then, perform clothing image segmentation on the aligned 2D clothing images and extract the clothing shape masks to obtain shape mask images;
3. First-stage training: train a StyleGAN clothing image generator (512 × 512 resolution) on a large number of aligned clothing pictures to obtain the pre-training model;
4. Second-stage training: add a shape encoder and a texture encoder, add the outputs of the corresponding layers of the two encoders (the 18 layers of outputs are added in one-to-one correspondence) as the input of the StyleGAN generator, and train end to end (the weights of the StyleGAN generator are frozen in this stage, i.e., the generator does not participate in this stage of training and is used only to train the clothing shape encoder and texture encoder);
5. The first-stage training loss function is the ordinary GAN loss of StyleGAN. The second-stage training uses three loss functions: the shape loss (the L1 distance between the input mask and the mask obtained by segmenting the generated clothing image), the pixel-level loss (the 2D distance loss between the original clothing image and the generated clothing image), and the LPIPS perceptual loss (the LPIPS perceptual distance between the original clothing image and the generated clothing image);
6. After the second-stage training is finished, input a target shape mask and a partially defective clothing texture at prediction time to obtain a 2D clothing image that conforms to the input shape and has a texture similar to the input texture.
With continued reference to FIG. 4, a flow 400 of yet another embodiment of a method of generating an image of apparel in accordance with the present application is shown. The method for generating the clothing image can comprise the following steps:
step 401, obtaining a shape image and a texture image of a garment of a specified style.
In the present embodiment, the execution body of the method of generating a clothing image (e.g., the server 105 shown in fig. 1) may acquire the shape image and the texture image of the specified style of clothing in various ways. For example, the execution body may obtain the shape image and texture image of the specified style of clothing from a database server (e.g., database server 104 shown in fig. 1) through a wired or wireless connection. As another example, the execution body may receive the shape image and texture image of a specified style of clothing collected by a terminal (e.g., terminals 101, 102 shown in fig. 1) or another device. For example, the shape of a long-sleeved windbreaker and the texture image of a short-sleeved T-shirt with a yellow star pattern may be specified.
Step 402, inputting the shape image and the texture image into a clothing generation model to generate a clothing image with a specified style.
In this embodiment, the execution body may input the images acquired in step 401 into the clothing generation model, thereby generating a clothing image of the specified style, for example, a long-sleeved windbreaker with a yellow star pattern.
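At inference time the trained model runs without gradients; a minimal sketch, where style_shape_mask and style_texture stand for the specified-style inputs acquired in step 401:

    shape_encoder.eval()
    texture_encoder.eval()
    with torch.no_grad():
        styled_garment = generate_virtual_garment(
            style_shape_mask, style_texture, generator)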
In this embodiment, the apparel generation model may be generated using the method described above in the embodiment of fig. 2. For a specific generation process, reference may be made to the related description of the embodiment in fig. 2, which is not described herein again.
It should be noted that the method for generating a clothing image in this embodiment may be used to test the clothing generation models generated by the above embodiments, and the clothing generation model can then be further optimized based on the generated clothing images. The method may also be a practical application of the clothing generation model generated by the above embodiments: by using that model to generate clothing images, clothing images of a specified shape and texture can be obtained.
With continued reference to FIG. 5, the present application provides one embodiment of an apparatus for training a clothing generation model as an implementation of the method illustrated in FIG. 2 above. The embodiment of the device corresponds to the embodiment of the method shown in fig. 2, and the device can be applied to various electronic devices.
As shown in fig. 5, the apparatus 500 for training of a clothing generation model of the present embodiment may include: an acquisition unit 501, a shape encoding unit 502, a texture encoding unit 503, a fusion unit 504, an adjustment unit 505, and an output unit 506. The acquiring unit 501 is configured to acquire a sample image of a garment, where the sample image includes an original image, a shape mask image, and a texture image; a shape encoding unit 502 configured to input the shape mask image to a shape encoder, resulting in a predetermined number of layers of shape features; a texture encoding unit 503 configured to input the texture image into a texture encoder, so as to obtain a predetermined number of layers of texture features; a fusion unit 504 configured to add the shape features of each layer and the texture features of each layer correspondingly, obtain fusion features of a predetermined number of layers, input the fusion features into a pre-training model, and output a virtual clothing image; an adjusting unit 505 configured to adjust relevant parameters of the shape encoder and the texture encoder based on a difference between the original image and the virtual clothing image; an output unit 506 configured to obtain a clothing generation model based on the adjusted shape encoder, texture encoder and the pre-training model.
In some optional implementations of this embodiment, the adjusting unit 505 is further configured to: calculate a 2D distance loss between the original image and the virtual apparel image; calculate a perceptual loss between the original image and the virtual apparel image; take a weighted sum of the 2D distance loss and the perceptual loss as a first loss value; and, if the first loss value is greater than or equal to a first preset threshold, adjust the relevant parameters of the shape encoder and the texture encoder.
In some optional implementations of this embodiment, the adjusting unit 505 is further configured to: perform image semantic segmentation on the virtual apparel image to obtain a segmentation mask map; calculate an L1 distance loss between the segmentation mask map and the shape mask image; take a weighted sum of the L1 distance loss, the 2D distance loss, and the perceptual loss as a second loss value; and, if the second loss value is greater than or equal to a second preset threshold, adjust the relevant parameters of the shape encoder and the texture encoder.
In some optional implementations of this embodiment, the obtaining unit 501 is further configured to: acquire an original image of the garment; perform image semantic segmentation on the original image to obtain a shape mask image; randomly occlude the original image to obtain a texture image; and combine the original image, the corresponding shape mask image, and the texture image into a sample image.
In some optional implementations of this embodiment, the pre-training model is the generator of a generative adversarial network, and the shape encoder and the texture encoder are convolutional neural networks with the same number of layers.
With continued reference to FIG. 6, the present application provides one embodiment of an apparatus for generating an image of apparel as an implementation of the method illustrated in FIG. 4 above. The embodiment of the device corresponds to the embodiment of the method shown in fig. 4, and the device can be applied to various electronic devices.
As shown in fig. 6, the apparatus 600 for generating a clothing image of the present embodiment may include: an acquisition unit 601 and a generation unit 602. Wherein the acquisition unit 601 is configured to acquire a shape image and a texture image of a garment of a specified style; the generating unit 602 is configured to input the shape image and the texture image into a clothing generation model generated by the apparatus 500, and generate a clothing image of a specified style.
In the technical solution of the present disclosure, the acquisition, storage, use, processing, transmission, provision, and disclosure of the personal information of the users involved all comply with relevant laws and regulations and do not violate public order and good morals.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
An electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of flows 200 or 400.
A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of flows 200 or 400.
A computer program product comprising a computer program which, when executed by a processor, implements the method of flow 200 or 400.
FIG. 7 shows a schematic block diagram of an example electronic device 700 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the device 700 comprises a computing unit 701, which may perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. Various programs and data required for the operation of the device 700 can also be stored in the RAM 703. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
A number of components in the device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, or the like; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 701 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 701 performs the methods and processes described above, such as the training method of the clothing generation model. For example, in some embodiments, the training method of the apparel generation model may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 700 via the ROM 702 and/or the communication unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the training method of the apparel generation model described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured by any other suitable means (e.g., by means of firmware) to perform the training method of the apparel generation model.
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/acts specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combining a blockchain.
It should be understood that various forms of the flows shown above, reordering, adding or deleting steps, may be used. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.

Claims (15)

1. A training method for a clothing generation model, comprising:
obtaining a sample image of the clothes, wherein the sample image comprises an original image, a shape mask image and a texture image;
inputting the shape mask image into a shape encoder to obtain a predetermined number of layers of shape features;
inputting the texture image into a texture encoder to obtain a predetermined number of layers of texture features;
adding the shape features of each layer to the texture features of the corresponding layer to obtain a predetermined number of layers of fusion features, inputting the fusion features into a pre-training model, and outputting a virtual apparel image;
adjusting relevant parameters of the shape encoder and the texture encoder based on a difference between the original image and the virtual apparel image;
and obtaining a clothing generation model based on the adjusted shape encoder, the adjusted texture encoder and the pre-training model.
2. The method of claim 1, wherein the adjusting the relevant parameters of the shape encoder and the texture encoder based on the difference between the original image and the virtual dress image comprises:
calculating a 2D (two-dimensional) distance loss between the original image and the virtual apparel image;
calculating a perceptual loss between the original image and the virtual apparel image;
taking a weighted sum of the 2D distance loss and the perceptual loss as a first loss value;
and if the first loss value is greater than or equal to a first preset threshold value, adjusting relevant parameters of the shape encoder and the texture encoder.
3. The method of claim 2, further comprising:
performing image semantic segmentation on the virtual clothing image to obtain a segmentation mask image;
calculating an L1 distance loss between the segmentation mask map and the shape mask image;
taking a weighted sum of the L1 distance loss, the 2D distance loss, and the perceptual loss as a second loss value;
and if the second loss value is greater than or equal to a second preset threshold value, adjusting the relevant parameters of the shape encoder and the texture encoder.
4. The method of claim 1, wherein said obtaining a sample image of a garment comprises:
acquiring an original image of the clothes;
performing image semantic segmentation on the original image to obtain a shape mask image;
randomly occluding the original image to obtain a texture image;
and combining the original image, the corresponding shape mask image and the texture image into a sample image.
5. The method of any one of claims 1-4, wherein the pre-training model is the generator of a generative adversarial network, and wherein the shape encoder and the texture encoder are convolutional neural networks having the same number of layers.
6. A method of generating an image of apparel, comprising:
acquiring a shape image and a texture image of a garment in a specified style;
inputting the shape image and the texture image into a clothing generation model generated by the method according to any one of claims 1 to 5, and generating a clothing image of a specified style.
7. A training apparatus for a clothing generation model, comprising:
an acquisition unit configured to acquire a sample image of a garment, wherein the sample image includes an original image, a shape mask image, and a texture image;
a shape encoding unit configured to input the shape mask image to a shape encoder, resulting in a predetermined number of layers of shape features;
a texture coding unit configured to input the texture image into a texture encoder, resulting in a predetermined number of layers of texture features;
the fusion unit is configured to add the shape features of each layer to the texture features of the corresponding layer to obtain a predetermined number of layers of fusion features, input the fusion features into the pre-training model, and output a virtual apparel image;
an adjusting unit configured to adjust relevant parameters of the shape encoder and the texture encoder based on a difference between the original image and the virtual dress image;
an output unit configured to obtain a clothing generation model based on the adjusted shape encoder, texture encoder and the pre-training model.
8. The apparatus of claim 7, wherein the adjustment unit is further configured to:
calculating a 2D distance loss between the original image and the virtual apparel image;
calculating a perceptual loss between the original image and the virtual apparel image;
taking a weighted sum of the 2D distance loss and the perceptual loss as a first loss value;
and if the first loss value is larger than or equal to a first preset threshold value, adjusting relevant parameters of the shape encoder and the texture encoder.
9. The apparatus of claim 8, wherein the adjustment unit is further configured to:
performing image semantic segmentation on the virtual clothing image to obtain a segmentation mask image;
calculating an L1 distance loss between the segmentation mask map and the shape mask image;
taking a weighted sum of the L1 distance loss, the 2D distance loss, and the perceptual loss as a second loss value;
and if the second loss value is larger than or equal to a second preset threshold value, adjusting the relevant parameters of the shape encoder and the texture encoder.
10. The apparatus of claim 7, wherein the obtaining unit is further configured to:
acquiring an original image of the clothes;
performing image semantic segmentation on the original image to obtain a shape mask image;
randomly occluding the original image to obtain a texture image;
and combining the original image, the corresponding shape mask image and the corresponding texture image into a sample image.
11. The apparatus of any one of claims 7-10, wherein the pre-training model is the generator of a generative adversarial network, and wherein the shape encoder and the texture encoder are convolutional neural networks with the same number of layers.
12. An apparatus to generate an image of a garment, comprising:
an acquisition unit configured to acquire a shape image and a texture image of a garment of a specified style;
a generating unit configured to input the shape image and the texture image into a clothing generation model generated by the apparatus according to any one of claims 7 to 11, and generate a clothing image of a specified style.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-6.
15. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-6.
CN202210769171.5A 2022-06-30 2022-06-30 Training of clothing generation model and method and device for generating clothing image Active CN115147508B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210769171.5A CN115147508B (en) 2022-06-30 2022-06-30 Training of clothing generation model and method and device for generating clothing image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210769171.5A CN115147508B (en) 2022-06-30 2022-06-30 Training of clothing generation model and method and device for generating clothing image

Publications (2)

Publication Number Publication Date
CN115147508A true CN115147508A (en) 2022-10-04
CN115147508B CN115147508B (en) 2023-09-22

Family

ID=83409870

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210769171.5A Active CN115147508B (en) 2022-06-30 2022-06-30 Training of clothing generation model and method and device for generating clothing image

Country Status (1)

Country Link
CN (1) CN115147508B (en)

Patent Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6307975B1 (en) * 1996-11-05 2001-10-23 Sony Corporation Image coding technique employing shape and texture coding
US20010005432A1 (en) * 1999-12-28 2001-06-28 Toshiya Takahashi Image decoding apparatus and image coding apparatus
JP2003067775A (en) * 2001-08-24 2003-03-07 Japan Science & Technology Corp Texture mapping method, texture mapping processing program, and computer-readable storage medium storing the program
US20180181802A1 (en) * 2016-12-28 2018-06-28 Adobe Systems Incorporated Recognizing combinations of body shape, pose, and clothing in three-dimensional input images
US20190147642A1 (en) * 2017-11-15 2019-05-16 Google Llc Learning to reconstruct 3d shapes by rendering many 3d views
US11055514B1 (en) * 2018-12-14 2021-07-06 Snap Inc. Image face manipulation
WO2020174215A1 (en) * 2019-02-25 2020-09-03 Huawei Technologies Co., Ltd. Joint shape and texture decoders for three-dimensional rendering
GB201902524D0 (en) * 2019-02-25 2019-04-10 Facesoft Ltd Joint shape and texture decoders for three-dimensional rendering
WO2020199693A1 (en) * 2019-03-29 2020-10-08 中国科学院深圳先进技术研究院 Large-pose face recognition method and apparatus, and device
US10818043B1 (en) * 2019-04-24 2020-10-27 Adobe Inc. Texture interpolation using neural networks
CN112215050A (en) * 2019-06-24 2021-01-12 北京眼神智能科技有限公司 Nonlinear 3DMM face reconstruction and posture normalization method, device, medium and equipment
US10552667B1 (en) * 2019-08-19 2020-02-04 Neon Evolution Inc. Methods and systems for image processing
CN110659958A (en) * 2019-09-06 2020-01-07 电子科技大学 Clothing matching generation method based on generation of countermeasure network
WO2021057426A1 (en) * 2019-09-29 2021-04-01 腾讯科技(深圳)有限公司 Method and apparatus for training image fusion processing model, device, and storage medium
CN114450719A (en) * 2019-09-30 2022-05-06 Oppo广东移动通信有限公司 Human body model reconstruction method, reconstruction system and storage medium
CN112053408A (en) * 2020-09-04 2020-12-08 清华大学 Face image compression method and device based on deep learning
WO2022089166A1 (en) * 2020-11-02 2022-05-05 腾讯科技(深圳)有限公司 Facial image processing method and apparatus, facial image display method and apparatus, and device
US20210407216A1 (en) * 2020-11-09 2021-12-30 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and apparatus for generating three-dimensional virtual image, and storage medium
CN112669343A (en) * 2021-01-04 2021-04-16 桂林电子科技大学 Zhuang minority nationality clothing segmentation method based on deep learning
CN113160035A (en) * 2021-04-16 2021-07-23 浙江工业大学 Human body image generation method based on posture guidance, style and shape feature constraints
CN113838176A (en) * 2021-09-16 2021-12-24 网易(杭州)网络有限公司 Model training method, three-dimensional face image generation method and equipment
CN114240954A (en) * 2021-12-16 2022-03-25 推想医疗科技股份有限公司 Network model training method and device and image segmentation method and device

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
SHIVASHANKAR S et al.: "Galois Field-based Approach for Rotation and Scale Invariant Texture Classification", International Journal of Image *
LIU Chenxi et al.: "Image object visual saliency judgment based on multi-feature combination", Computer Engineering and Applications *
ZENG Zhi; WU Caigui; TANG Quanhua; YU Jiahe; LI Yaqing; GAO Jian: "Product image classification based on multi-feature fusion and deep learning", Computer Engineering and Design, no. 11 *
DU Mingkun; WANG Qianyi; CAI Xingyu: "Research on a shoeprint image recognition method based on fused features and extreme learning machines", Electronic Technology, no. 10 *
WU Haiyan; LI Weiping: "A texture segmentation algorithm based on shape descriptors and Siamese neural networks", Microelectronics & Computer, no. 04 *
XIE Yupeng; WU Haiyan: "AAM-based face image description and coding", Computer Simulation, no. 06 *
HUANG Jing; WANG Xi; QI Dongxu; TANG Zesheng: "An image synthesis method based on binary mask images and its applications", Journal of Computer-Aided Design & Computer Graphics, no. 05 *

Also Published As

Publication number Publication date
CN115147508B (en) 2023-09-22

Similar Documents

Publication Publication Date Title
US11935167B2 (en) Method and apparatus for virtual fitting
CN113129450B (en) Virtual fitting method, device, electronic equipment and medium
CN109308681A (en) Image processing method and device
CN112784765B (en) Method, apparatus, device and storage medium for recognizing motion
CN114723888B (en) Three-dimensional hair model generation method, device, equipment, storage medium and product
CN111861867A (en) Image background blurring method and device
CN116071619A (en) Training method of virtual fitting model, virtual fitting method and electronic equipment
JP2023131117A (en) Joint perception model training, joint perception method, device, and medium
CN111768467A (en) Image filling method, device, equipment and storage medium
CN112562045B (en) Method, apparatus, device and storage medium for generating model and generating 3D animation
CN109829520A (en) Image processing method and device
CN117422851A (en) Virtual clothes changing method and device and electronic equipment
CN115147508B (en) Training of clothing generation model and method and device for generating clothing image
CN114140320B (en) Image migration method and training method and device of image migration model
CN115409951A (en) Image processing method, image processing device, electronic equipment and storage medium
CN115147526B (en) Training of clothing generation model and method and device for generating clothing image
CN114419182A (en) Image processing method and device
CN114529649A (en) Image processing method and device
Lai et al. Keypoints-Based 2D Virtual Try-on Network System
CN115082624A (en) Human body model construction method and device, electronic equipment and storage medium
CN114170403A (en) Virtual fitting method, device, server and storage medium
CN115147681B (en) Training of clothing generation model and method and device for generating clothing image
CN114758391B (en) Hair style image determining method, device, electronic equipment, storage medium and product
CN114820908B (en) Virtual image generation method and device, electronic equipment and storage medium
CN112764649B (en) Virtual image generation method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant