CN117409141A - Virtual clothing wearing method and device, live broadcast system, electronic equipment and medium - Google Patents

Virtual clothing wearing method and device, live broadcast system, electronic equipment and medium

Info

Publication number
CN117409141A
CN117409141A (application CN202311388827.XA)
Authority
CN
China
Prior art keywords
image
garment
model
clothing
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311388827.XA
Other languages
Chinese (zh)
Inventor
宫凯程
姚粤汉
陈增海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Cubesili Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Cubesili Information Technology Co Ltd
Priority to CN202311388827.XA
Publication of CN117409141A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G06Q30/0641 Shopping interfaces
    • G06Q30/0643 Graphical representation of items or shoppers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/8146 Monomedia components thereof involving graphical data, e.g. 3D object, 2D graphics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/04 Indexing scheme for image data processing or generation, in general involving 3D image data
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Software Systems (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Databases & Information Systems (AREA)
  • Development Economics (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application relates to a virtual garment wearing method and device, a live broadcast system, an electronic device, and a computer-readable storage medium; the method comprises the following steps: reconstructing a corresponding 3D human body model from an input 2D human body image; calculating a similarity transformation matrix for a 3D clothing model, and aligning the 3D clothing model with the 3D human body model according to the similarity transformation matrix; rendering the aligned 3D garment model into a 2D garment image and acquiring an image mask of the garment region; and fusing the 2D clothing image with the 2D human body image according to the image mask to obtain a virtual try-on image. This technical scheme achieves a more realistic, better-fitting wearing effect than 2D schemes and is suitable for various application scenarios such as virtual fitting on network platforms and gift special effects in live broadcast services.

Description

Virtual clothing wearing method and device, live broadcast system, electronic equipment and medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular to a virtual garment wearing method, a virtual garment wearing device, a live broadcast system, an electronic device, and a computer-readable storage medium.
Background
With the development of network live broadcast technology, various image processing techniques are widely applied in network live broadcast to improve the propagation effect of the high-quality content shared there. Virtual clothing wearing is a virtual image processing technique that helps a user quickly experience new clothing without the cumbersome operations of removing the original clothing and putting on the new one; it is therefore used not only in live video broadcast but also widely on various e-commerce platforms.
The traditional virtual clothing wearing technology mainly adopts 2D schemes, which generally attach 2D clothing pictures to 2D human body pictures by means of human body key point algorithms and semantic segmentation algorithms. However, because 2D techniques cannot fully describe 3D information, the wearing effect under complex postures such as occlusion and body tilt is neither realistic nor well fitted, and the applicable scenarios are limited; it is therefore difficult to meet the usage requirements of network live broadcast services, which affects the live broadcast effect.
Disclosure of Invention
Accordingly, in order to solve the above-mentioned problems, it is necessary to provide a virtual garment wearing method, a virtual garment wearing device, a live broadcast system, an electronic device, and a computer-readable storage medium that improve the try-on effect.
In a first aspect, the present application provides a virtual garment wearing method, comprising:
reconstructing a corresponding 3D human model according to the input 2D human image;
calculating a similarity transformation matrix of a 3D clothing model, and aligning the 3D clothing model with the 3D human body model according to the similarity transformation matrix;
rendering the aligned 3D garment model into a 2D garment image and acquiring an image mask of a garment region;
and fusing the 2D clothing image and the 2D human body image according to the image mask to obtain a virtual try-on image.
In one embodiment, reconstructing the corresponding 3D human body model from the input 2D human body image includes:
extracting image features from the input 2D human body image, and predicting reconstruction coefficients of an SMPL human body base model by using a regression network;
and obtaining the corresponding 3D human body model from the reconstruction coefficients in combination with the SMPL human body base model.
In one embodiment, calculating a similarity transformation matrix for a 3D garment model and aligning the 3D garment model with the 3D human body model according to the similarity transformation matrix includes:
selecting N first vertices on the average mannequin;
selecting N second vertices at the same positions as the first vertices on the 3D clothing model, and acquiring the first 3D coordinates of the second vertices;
acquiring the second 3D coordinates of the N first vertices on the 3D human body model;
calculating the similarity transformation matrix according to the first 3D coordinates and the second 3D coordinates;
and aligning the position, size, and angle of the 3D clothing model with the 3D human body model according to the similarity transformation matrix.
In one embodiment, four first vertices are selected on the left shoulder, right shoulder, left waist, and right waist of the average mannequin;
four second vertices at the same positions as the first vertices are selected on the 3D clothing model, and the first 3D coordinates corresponding to the four second vertices are acquired;
and the second 3D coordinates corresponding to the four first vertices on the 3D human body model are acquired.
the similarity transformation matrix comprises a rotation matrix R and a translation matrix T, wherein the rotation matrix R is a 3×3 matrix, and the translation matrix T is a 1×3 matrix.
In one embodiment, aligning the position, size, and angle of the 3D garment model with the 3D human body model according to the similarity transformation matrix includes:
acquiring third 3D coordinates corresponding to each vertex on the 3D garment model;
according to the third 3D coordinates, the rotation matrix R and the translation matrix T, calculating a fourth 3D coordinate corresponding to each vertex on the 3D garment model, where the calculation formula is as follows:
C_n = C_o × R + T
where C_o represents the third 3D coordinate, C_n represents the fourth 3D coordinate, R represents the rotation matrix, T represents the translation matrix, and "×" denotes matrix multiplication;
and aligning the position, size, and angle of the 3D clothing model with the 3D human body model according to the fourth 3D coordinates.
In one embodiment, rendering the aligned 3D garment model into a 2D garment image and acquiring an image mask of the garment region includes:
rendering the aligned 3D garment model into a 2D garment image, wherein the pixel values in the clothing region of the 2D garment image are non-zero and the pixel values in the other regions are zero;
and determining an image mask of the clothing region according to the pixel values of the 2D clothing image.
In one embodiment, fusing the 2D clothing image and the 2D human body image according to the image mask to obtain a virtual try-on image includes:
cropping the 2D clothing image according to the image mask to obtain a clothing region image;
and overlaying the clothing region image on the 2D human body image and fusing them to obtain the virtual try-on image.
In a second aspect, the present application provides a virtual garment wearing apparatus comprising:
the reconstruction module is used for reconstructing a corresponding 3D human body model according to the input 2D human body image;
the alignment module is used for calculating a similarity transformation matrix for the 3D clothing model and aligning the 3D clothing model with the 3D human body model according to the similarity transformation matrix;
the rendering module is used for rendering the aligned 3D garment model into a 2D garment image and obtaining the image mask of the garment region from the 2D garment image;
and the fusion module is used for fusing the 2D clothing image and the 2D human body image according to the image mask to obtain a virtual try-on image.
In a third aspect, the present application provides a live broadcast system, comprising: an anchor end, an audience end, and a live broadcast server; wherein the anchor end and the audience end are respectively connected to the live broadcast server through a communication network;
the anchor end is used for accessing the anchor of a live broadcast room, collecting the anchor's live video stream, and uploading the live video stream to the live broadcast server;
the live broadcast server is used for forwarding the anchor's live video to the audience end and for wearing the 3D garment model on the anchor's live video by using the virtual garment wearing method to obtain a virtual try-on image;
and the audience end is used for accessing audience users of the live broadcast room, receiving the anchor's live video, and playing the virtual try-on image.
In a fourth aspect, the present application provides an electronic device comprising a memory storing a computer program and a processor that implements the steps of the virtual garment wearing method when executing the computer program.
In a fifth aspect, the present application provides a computer readable storage medium having stored thereon a computer program which when executed by a processor implements the steps of the virtual garment wearing method.
According to the technical scheme provided by the above embodiments, a 3D human body model is reconstructed by a 3D human body reconstruction algorithm, the 3D garment model is worn on the reconstructed 3D human body model, and image fusion is then performed to obtain a virtual try-on image. Because this scheme renders the 2D image from 3D information, it achieves a more realistic, better-fitting wearing effect than 2D schemes and is applicable to various scenarios such as virtual fitting on network platforms and gift special effects in live broadcast services.
Drawings
FIG. 1 is a schematic illustration of an exemplary live service application scenario;
FIG. 2 is a flow chart of a virtual garment wearing method of one embodiment;
FIG. 3 is a flow chart of a method of aligning a 3D garment model with a 3D mannequin according to one embodiment;
FIG. 4 is a schematic diagram of an exemplary virtual garment wearing method algorithm;
FIG. 5 is a schematic diagram of a virtual garment wearing apparatus according to one embodiment;
FIG. 6 is a schematic diagram of an exemplary live system architecture;
fig. 7 is a schematic structural diagram of an electronic device of an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The technical scheme provided by the embodiments of the present application can be applied to the application scenario shown in fig. 1, which is a schematic diagram of an example live broadcast service application scenario. The live broadcast system may comprise a live broadcast server, an anchor end, and an audience end, where the anchor end and the audience end are in data communication with the live broadcast server through a communication network, so that the anchor at the anchor end and the audience users at the audience end can conduct real-time network live broadcast. The terminal devices of the anchor end and the audience end may be, but are not limited to, personal computers, notebook computers, smart phones, and tablet computers, and the live broadcast server may be implemented by an independent server or a server cluster composed of multiple servers.
Embodiments of the virtual garment wearing method of the present application are described below; the method can be applied to virtual fitting on various network platforms and to virtual garment wearing scenarios in live video broadcast. Referring to fig. 2, fig. 2 is a flowchart of a virtual garment wearing method according to one embodiment, which may include the following steps:
s10, reconstructing a corresponding 3D human body model according to the input 2D human body image.
In this step, for one input 2D human body image, a corresponding 3D human body model may be acquired based on a 3D reconstruction algorithm.
In one embodiment, taking the HMR (Human Mesh Recovery) algorithm as an example, image features are first extracted from the input 2D human body image; a regression network then predicts three kinds of reconstruction coefficients of the SMPL (Skinned Multi-Person Linear) human body base model, namely shape parameters, pose parameters, and camera parameters; finally, the corresponding 3D human body model is reconstructed from the shape and pose parameters predicted by the HMR model in combination with the SMPL human body base model.
The shape parameter controls the height and build of the 3D human body obtained by deforming the SMPL base model; the pose parameter controls the action, posture, and the like of the 3D human body; the camera parameter controls the position, angle, and the like of the camera when rendering the 2D human body image.
For example, the 2D human body image may be input into an encoder to obtain image features; the image features are sent to an iterative 3D regression module to infer the pose, shape, and camera parameters of the 3D human body; the reconstructed body is projected onto annotated 2D key points; and the derived reconstruction parameters are fed to the discriminator of an adversarial network to judge whether the human mesh data comes from a real human body.
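To make this reconstruction step concrete, the following is a minimal sketch of how predicted shape coefficients deform an SMPL-style base mesh. The random stand-in data and the regress_coefficients stub are illustrative assumptions only; they are not the actual HMR network or the SMPL implementation.

```python
import numpy as np

N_VERTS, N_SHAPE = 6890, 10  # SMPL uses 6890 vertices and 10 shape coefficients

rng = np.random.default_rng(0)
template = rng.standard_normal((N_VERTS, 3))             # stand-in for the SMPL template mesh
shape_dirs = rng.standard_normal((N_VERTS, 3, N_SHAPE))  # stand-in for the shape blend shapes

def regress_coefficients(image_features: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the HMR regression network that predicts
    the shape coefficients (beta) from encoder image features."""
    return image_features[:N_SHAPE] * 0.03

def reconstruct_shape(beta: np.ndarray) -> np.ndarray:
    """Apply the SMPL-style linear shape blend-shape term: V = T + S @ beta."""
    return template + shape_dirs @ beta

features = rng.standard_normal(2048)   # stand-in for the encoder's image features
beta = regress_coefficients(features)
body_verts = reconstruct_shape(beta)   # (6890, 3) vertices of the 3D human body model
print(body_verts.shape)
```

In the real pipeline the pose and camera parameters predicted alongside beta would additionally drive skeletal posing and projection, which are omitted here for brevity.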
S20, calculating a similarity transformation matrix of the 3D clothing model, and aligning the 3D clothing model with the 3D human body model according to the similarity transformation matrix.
In this step, for the 3D garment model to be worn, a similarity transformation matrix is calculated from a certain number of selected key points, and the 3D garment model is then aligned with the 3D human body model in position, size, angle, and the like according to the similarity transformation matrix.
In one embodiment, referring to fig. 3, which is a flowchart of a method for aligning the 3D clothing model with the 3D human body model according to one embodiment, step S20 may include the following steps:
s201, selecting N first vertexes on the average human body model.
For example, four first vertices may be selected as keypoints on the left shoulder, right shoulder, left waist, and right waist of the average mannequin; for example, vertices corresponding to the left shoulder, right shoulder, left waist, and right waist are selected on the average mannequin M of SMPL, and the numbers v1, v2, v3, v4 of the 4 first vertices are recorded.
S202, selecting N second vertices at the same positions as the first vertices on the 3D clothing model, and acquiring the first 3D coordinates of the second vertices.
Illustratively, four second vertices at the positions of the first vertices are selected on the 3D clothing model, and the first 3D coordinates corresponding to the four second vertices are acquired; for example, four second vertices at the same positions are selected on the 3D clothing model C, and the first 3D coordinates corresponding to these 4 vertices are recorded as c1, c2, c3, and c4.
S203, acquiring the second 3D coordinates of the N first vertices on the 3D human body model.
Specifically, after the 3D human body model P is predicted from the image in step S10, the second 3D coordinates of the first vertices with indices v1, v2, v3, and v4 are taken from the 3D human body model P and denoted p1, p2, p3, and p4, respectively.
S204, calculating the similarity transformation matrix according to the first 3D coordinates and the second 3D coordinates.
Specifically, the similarity transformation matrix includes a rotation matrix R, which is a 3×3 matrix, and a translation matrix T, which is a 1×3 matrix.
For example, a similarity transformation matrix is calculated from the first 3D coordinates c1, c2, c3, c4 and the second 3D coordinates p1, p2, p3, p4, comprising a rotation matrix R and a translation matrix T. The rotation matrix R is a 3×3 matrix, which may be expressed as [[r1, r2, r3], [r4, r5, r6], [r7, r8, r9]], where r1-r9 are the elements of the matrix; the translation matrix T is a 1×3 matrix, which may be expressed as [t1, t2, t3], where t1-t3 are the elements of the matrix.
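The patent does not specify how R and T are solved from the four point pairs; a standard choice is the Umeyama/Kabsch SVD procedure, sketched below under that assumption. The solver folds the uniform scale into R (so size is aligned too) and uses the row-vector convention C_n = C_o × R + T from the formula given later; the sample coordinates are synthetic.

```python
import numpy as np

def estimate_similarity(c: np.ndarray, p: np.ndarray):
    """Estimate a 3x3 matrix R (rotation with uniform scale folded in) and a
    1x3 translation T such that p ~= c @ R + T, from N >= 3 corresponding 3D
    points given as rows of c (garment) and p (body).  Umeyama/Kabsch solver;
    the patent does not name the method, so this choice is an assumption."""
    mu_c, mu_p = c.mean(axis=0), p.mean(axis=0)
    cc, pc = c - mu_c, p - mu_p
    H = cc.T @ pc                          # 3x3 cross-covariance of the point sets
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))     # guard against a reflection solution
    D = np.array([1.0, 1.0, d])
    R = (U * D) @ Vt                       # rotation in the row-vector convention
    s = (S * D).sum() / (cc ** 2).sum()    # uniform scale aligns the size
    Rs = s * R
    T = mu_p - mu_c @ Rs                   # 1x3 translation
    return Rs, T

# Synthetic keypoints c1..c4 (garment) and p1..p4 (body):
c = np.array([[0.0, 1.6, 0.0], [0.4, 1.6, 0.0], [0.0, 1.0, 0.0], [0.4, 1.0, 0.0]])
p = c * 0.9 + np.array([0.05, -0.02, 0.01])
R, T = estimate_similarity(c, p)
print(np.allclose(c @ R + T, p))           # True: the four pairs align
```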
S205, aligning the position, size, and angle of the 3D clothing model with the 3D human body model according to the similarity transformation matrix.
In one embodiment, the alignment process may specifically include the steps of:
a. Acquiring the third 3D coordinates corresponding to each vertex on the 3D clothing model.
b. According to the third 3D coordinates, the rotation matrix R and the translation matrix T, calculating a fourth 3D coordinate corresponding to each vertex on the 3D garment model, where the calculation formula is as follows:
C_n = C_o × R + T
where C_o represents the third 3D coordinate, C_n represents the fourth 3D coordinate, R represents the rotation matrix, T represents the translation matrix, and "×" denotes matrix multiplication.
c. Aligning the position, size, and angle of the 3D clothing model with the 3D human body model according to the fourth 3D coordinates.
According to the scheme of this embodiment, new vertex coordinates of the 3D garment model are obtained by multiplying the 3D coordinates of all its vertices by the rotation matrix R and adding the translation matrix T, thereby aligning the position, size, and angle of the 3D garment model with the 3D human body model so that the garment is worn more realistically on the reconstructed 3D human body model.
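Continuing the sketch above, applying the estimated transform to every garment vertex is a single vectorized operation; stand-in values for R and T are used here so the snippet runs on its own.

```python
import numpy as np

# R and T as estimated in the previous sketch; illustrative stand-ins here
# (a 0.9 uniform scale and a small shift).
R = 0.9 * np.eye(3)
T = np.array([0.05, -0.02, 0.01])

garment_verts = np.random.default_rng(1).standard_normal((5000, 3))  # stand-in mesh
aligned_verts = garment_verts @ R + T   # C_n = C_o × R + T, with T broadcast per row
```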
And S30, rendering the aligned 3D garment model into a 2D garment image and acquiring an image mask of a garment region.
In this step, after the alignment processing, the 3D garment model C and the 3D human body model P are aligned in position, size, angle, and the like; the 3D garment model is then rendered into a 2D garment image, and an image mask of the garment region is obtained so that virtual wearing can be performed on the 2D garment.
In one embodiment, step S30 may include: rendering the aligned 3D garment model into a 2D garment image, and determining the image mask of the garment region from the pixel values of the 2D garment image, where the pixel values in the garment region of the 2D garment image are non-zero and the pixel values in other regions are zero.
In the above embodiment, the pixel values of the garment region are set to non-zero values and the pixel values of the other regions are set to zero, so that the extent of the garment region can be accurately determined from the pixel values during fusion.
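As a minimal sketch of this mask extraction, assume the aligned garment has already been rendered (with any off-the-shelf mesh renderer) into an RGB image whose background pixels are exactly zero, as described above; the image content here is a synthetic stand-in.

```python
import numpy as np

rendered = np.zeros((720, 1280, 3), dtype=np.uint8)  # rendered 2D garment image
rendered[200:500, 400:800] = 180                     # stand-in garment pixels

# The mask is True wherever any channel is non-zero, i.e. the garment region.
mask = np.any(rendered != 0, axis=-1)
```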
And S40, fusing the 2D clothing image and the 2D human body image according to the image mask to obtain a virtual try-on image.
In one embodiment, during the above fusion process, the 2D garment image may be cropped according to the image mask to obtain a garment region image; the garment region image is then overlaid on the 2D human body image and fused with it to obtain the virtual try-on image.
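A minimal sketch of this fusion step follows, rebuilding the rendered image and mask from the previous snippet; human_img is an assumed 2D human body image of the same size, and all contents are synthetic stand-ins.

```python
import numpy as np

rendered = np.zeros((720, 1280, 3), dtype=np.uint8)
rendered[200:500, 400:800] = 180                 # stand-in rendered garment pixels
mask = np.any(rendered != 0, axis=-1)            # garment-region mask as in step S30

human_img = np.full((720, 1280, 3), 90, dtype=np.uint8)  # stand-in 2D human image

# Crop the garment region via the mask and overlay it onto the human image.
try_on = human_img.copy()
try_on[mask] = rendered[mask]
```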
Based on the virtual garment wearing method of the above embodiments, fig. 4 shows an exemplary algorithm schematic. First, a 3D human body model is reconstructed from the 2D human body image by a 3D human body reconstruction algorithm, and several key points are selected on it; meanwhile, key points at the corresponding positions are selected on the 3D garment model to be worn, a similarity transformation matrix is calculated, and the 3D garment model is aligned with the 3D human body model using this matrix. The 3D garment model is then rendered into a 2D garment image and the image mask of the garment region is extracted; finally, the 2D garment image is fused with the 2D human body image using the image mask to obtain the virtual try-on image. A 3D garment wearing effect is thus achieved that fully exploits the advantages of 3D information, producing a more realistic, better-fitting result than 2D virtual wearing schemes and suiting more application scenarios.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the execution order of the steps is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in these flowcharts may include multiple sub-steps or stages, which are not necessarily executed at the same time but may be executed at different times; their execution order is not necessarily sequential, and they may be executed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Based on the same inventive concept, the present application also provides an apparatus for implementing the above related method. The implementation solution provided by the apparatus is similar to that described for the method above, so for the specific limitations of the one or more apparatus embodiments provided below, reference may be made to the limitations of the related method above, which are not repeated here.
Referring to fig. 5, fig. 5 is a schematic structural view of a virtual garment wearing apparatus according to one embodiment, the apparatus comprising:
a reconstruction module 10 for reconstructing a corresponding 3D human model from the input 2D human image;
an alignment module 20, configured to calculate a similarity transformation matrix of a 3D clothing model, and align the 3D clothing model with the 3D mannequin according to the similarity transformation matrix;
a rendering module 30, configured to render the aligned 3D garment model into a 2D garment image, and obtain an image mask where a garment region is located according to the 2D garment image;
and the fusion module 40 is configured to fuse the 2D clothing image and the 2D human body image according to the image mask to obtain a virtual try-on image.
Each of the modules in the above apparatus may be implemented in whole or in part by software, hardware, or combinations thereof. The above modules may be embedded in hardware form in, or independent of, a processor in the electronic device, or may be stored in software form in a memory of the electronic device, so that the processor can call and execute the operations corresponding to the above modules.
The virtual garment wearing apparatus of this embodiment can execute the virtual garment wearing method provided by the embodiments of the present application, and its implementation principle is similar; the actions executed by each module of the apparatus correspond to the steps of the method in the embodiments of the present application, and for a detailed description of the functions of each module, reference may be made to the description of the corresponding virtual garment wearing method above, which is not repeated here.
An embodiment of a live system is set forth below.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an exemplary live broadcast system, where the live broadcast system includes: the system comprises a main broadcasting end, a spectator end and a live broadcasting server; wherein the anchor end and the audience end are respectively connected to the live broadcast server through communication networks.
The anchor end is used for accessing the anchor user of a live broadcast room and collecting the anchor's live video stream for uploading to the live broadcast server; the live broadcast server is used for forwarding the anchor's live video to the audience end and for wearing the 3D garment model on the anchor's live video by means of the virtual garment wearing method to obtain a virtual try-on image; and the audience end is used for accessing the audience users of the live broadcast room and receiving the anchor's live video and the virtual try-on image for playback.
As shown in fig. 6, suppose audience users A, B, C, ... watch the anchor's live video picture through an App client. When the anchor uses the virtual dressing special effect, the live broadcast server virtually wears the 3D garment model on the 2D human body image of the anchor's video picture by using the virtual garment wearing method provided by the embodiments of the present application, obtaining a virtual try-on image and thereby achieving the virtual dressing special effect; the live broadcast server then sends the live video stream containing the virtual try-on image to the audience end of each audience user A, B, C, ..., and each audience user watches the anchor's live video and the anchor's virtual try-on image through the audience end.
Embodiments of the electronic device and computer-readable storage medium of the present application are described below.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an exemplary electronic device, which may be the device running the live broadcast server or the device running the audience end and the anchor end; it includes a processor, a memory, and a network interface connected through a system bus. The processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and the computer programs in the non-volatile storage medium. The database of the electronic device is used for storing data such as a face image data set. The network interface of the electronic device is used for connecting to external devices through a communication network. The computer program, when executed by the processor, implements the related methods provided by the embodiments of the present application.
It will be appreciated by those skilled in the art that the structure shown above is merely a block diagram of a portion of the structure related to the present application and does not constitute a limitation on the electronic device to which the present application is applied; a specific electronic device may include more or fewer components than shown in the drawings, combine certain components, or have a different arrangement of components.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the methods of the embodiments described above. Those skilled in the art will appreciate that implementing all or part of the above methods may be accomplished by a computer program stored on a non-transitory computer-readable storage medium, which, when executed, may include the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM is available in many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, without limitation, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, data processing logic units based on quantum computing, and the like.
It should be noted that, user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered within the scope of this specification.
The above examples represent only a few embodiments of the present application; although they are described in considerable detail, they are not to be construed as limiting the scope of the application. It should be noted that those skilled in the art can make various modifications and improvements without departing from the spirit of the present application, all of which fall within the protection scope of the present application. Accordingly, the protection scope of the present application shall be subject to the appended claims.

Claims (11)

1. A method of wearing virtual apparel, comprising:
reconstructing a corresponding 3D human model according to the input 2D human image;
calculating a similarity transformation matrix of a 3D clothing model, and aligning the 3D clothing model with the 3D human body model according to the similarity transformation matrix;
rendering the aligned 3D garment model into a 2D garment image and acquiring an image mask of a garment region;
and fusing the 2D clothing image and the 2D human body image according to the image mask to obtain a virtual try-on image.
2. The virtual garment wearing method according to claim 1, wherein reconstructing the corresponding 3D human body model from the input 2D human body image includes:
extracting image characteristics from the input 2D human body image, and predicting reconstruction coefficients of the SMPL human body base model by using a regression network;
and acquiring a corresponding 3D human model according to the reconstruction coefficient and combining the SMPL human base model.
3. The virtual garment wearing method of claim 2, wherein calculating a similarity transformation matrix for a 3D garment model and aligning the 3D garment model with the 3D human body model according to the similarity transformation matrix comprises:
selecting N first vertices on the average mannequin;
selecting N second vertices at the same positions as the first vertices on the 3D clothing model, and acquiring the first 3D coordinates of the second vertices;
acquiring the second 3D coordinates of the N first vertices on the 3D human body model;
calculating the similarity transformation matrix according to the first 3D coordinates and the second 3D coordinates;
and aligning the position, size, and angle of the 3D clothing model with the 3D human body model according to the similarity transformation matrix.
4. The virtual garment wearing method according to claim 3, wherein four first vertices are selected on the left shoulder, right shoulder, left waist, and right waist of the average mannequin;
four second vertices at the same positions as the first vertices are selected on the 3D clothing model, and the first 3D coordinates corresponding to the four second vertices are acquired;
and the second 3D coordinates corresponding to the four first vertices on the 3D human body model are acquired;
the similarity transformation matrix comprises a rotation matrix R and a translation matrix T, wherein the rotation matrix R is a 3×3 matrix, and the translation matrix T is a 1×3 matrix.
5. The virtual garment wearing method of claim 4, wherein aligning the position, size, and angle of the 3D garment model with the 3D human body model according to the similarity transformation matrix comprises:
acquiring third 3D coordinates corresponding to each vertex on the 3D garment model;
according to the third 3D coordinates, the rotation matrix R and the translation matrix T, calculating a fourth 3D coordinate corresponding to each vertex on the 3D garment model, where the calculation formula is as follows:
C_n = C_o × R + T
where C_o represents the third 3D coordinate, C_n represents the fourth 3D coordinate, R represents the rotation matrix, T represents the translation matrix, and "×" denotes matrix multiplication;
and aligning the position, size, and angle of the 3D clothing model with the 3D human body model according to the fourth 3D coordinates.
6. The virtual garment wearing method of claim 1, wherein rendering the aligned 3D garment model into a 2D garment image and obtaining an image mask of a garment region comprises:
rendering the aligned 3D garment model into a 2D garment image, wherein the pixel values in the clothing region of the 2D garment image are non-zero and the pixel values in the other regions are zero;
and determining an image mask of the clothing region according to the pixel values of the 2D clothing image.
7. The virtual garment wearing method according to claim 6, wherein fusing the 2D garment image and the 2D human body image according to the image mask to obtain a virtual try-on image comprises:
cropping the 2D clothing image according to the image mask to obtain a clothing region image;
and overlaying the clothing region image on the 2D human body image and fusing them to obtain a virtual try-on image.
8. A virtual garment wearing apparatus, comprising:
the reconstruction module is used for reconstructing a corresponding 3D human body model according to the input 2D human body image;
the alignment module is used for calculating a similarity transformation matrix for the 3D clothing model and aligning the 3D clothing model with the 3D human body model according to the similarity transformation matrix;
the rendering module is used for rendering the aligned 3D garment model into a 2D garment image and obtaining the image mask of the garment region from the 2D garment image;
and the fusion module is used for fusing the 2D clothing image and the 2D human body image according to the image mask to obtain a virtual try-on image.
9. A live broadcast system, comprising: an anchor end, an audience end, and a live broadcast server; wherein the anchor end and the audience end are respectively connected to the live broadcast server through a communication network;
the anchor end is used for accessing the anchor of a live broadcast room, collecting the anchor's live video stream, and uploading the live video stream to the live broadcast server;
the live broadcast server is used for forwarding the anchor's live video to the audience end and for wearing the 3D garment model on the anchor's live video by using the virtual garment wearing method of any one of claims 1 to 7 to obtain a virtual try-on image;
and the audience end is used for accessing audience users of the live broadcast room, receiving the anchor's live video, and playing the virtual try-on image.
10. An electronic device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the virtual garment wearing method of any one of claims 1-7.
11. A computer readable storage medium having stored thereon a computer program, characterized in that the computer program when executed by a processor implements the steps of the virtual garment wearing method of any of claims 1-7.
CN202311388827.XA 2023-10-24 2023-10-24 Virtual clothing wearing method and device, live broadcast system, electronic equipment and medium Pending CN117409141A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311388827.XA CN117409141A (en) 2023-10-24 2023-10-24 Virtual clothing wearing method and device, live broadcast system, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311388827.XA CN117409141A (en) 2023-10-24 2023-10-24 Virtual clothing wearing method and device, live broadcast system, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN117409141A (en) 2024-01-16

Family

ID=89490314

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311388827.XA Pending CN117409141A (en) 2023-10-24 2023-10-24 Virtual clothing wearing method and device, live broadcast system, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN117409141A (en)

Similar Documents

Publication Publication Date Title
Dong et al. Color-guided depth recovery via joint local structural and nonlocal low-rank regularization
WO2021008166A1 (en) Method and apparatus for virtual fitting
CN107248169B (en) Image positioning method and device
WO2023036160A1 (en) Video processing method and apparatus, computer-readable storage medium, and computer device
Chen et al. Face swapping: realistic image synthesis based on facial landmarks alignment
CN111047509A (en) Image special effect processing method and device and terminal
CN109783658A (en) Image processing method, device and storage medium
CN112766215A (en) Face fusion method and device, electronic equipment and storage medium
Niu et al. Image retargeting quality assessment based on registration confidence measure and noticeability-based pooling
CN110619670A (en) Face interchange method and device, computer equipment and storage medium
Song et al. Weakly-supervised stitching network for real-world panoramic image generation
Hong et al. PAR 2 Net: End-to-End Panoramic Image Reflection Removal
TWI711004B (en) Picture processing method and device
CN117409141A (en) Virtual clothing wearing method and device, live broadcast system, electronic equipment and medium
Zeng et al. Multi-view self-supervised learning for 3D facial texture reconstruction from single image
CN116977539A (en) Image processing method, apparatus, computer device, storage medium, and program product
CN115564639A (en) Background blurring method and device, computer equipment and storage medium
WO2022047662A1 (en) Method and system of neural network object recognition for warpable jerseys with multiple attributes
CN111652807B (en) Eye adjusting and live broadcasting method and device, electronic equipment and storage medium
CN111652023B (en) Mouth-type adjustment and live broadcast method and device, electronic equipment and storage medium
EP4085628A1 (en) System and method for dynamic images virtualisation
Huang et al. Linedl: Processing images line-by-line with deep learning
Zhao et al. Stripe sensitive convolution for omnidirectional image dehazing
Bai et al. Local-to-Global Panorama Inpainting for Locale-Aware Indoor Lighting Prediction
CN110910303A (en) Image style migration method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination