CN117523142A - Virtual fitting method, virtual fitting device, electronic equipment and computer readable storage medium
- Publication number
- CN117523142A CN117523142A CN202311510825.3A CN202311510825A CN117523142A CN 117523142 A CN117523142 A CN 117523142A CN 202311510825 A CN202311510825 A CN 202311510825A CN 117523142 A CN117523142 A CN 117523142A
- Authority
- CN
- China
- Prior art keywords
- target object
- target
- image
- area
- region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2012—Colour editing, changing, or manipulating; Use of colour codes
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Architecture (AREA)
- Geometry (AREA)
- Processing Or Creating Images (AREA)
Abstract
The embodiment of the application discloses a virtual try-on method, a virtual try-on device, electronic equipment and a computer readable storage medium. An image of a target object to be subjected to virtual try-on is acquired, wherein at least one garment is worn on the target object in the image; based on the image of the target object, grid reconstruction is carried out on the target object to obtain a three-dimensional model of the target object; a wearing area corresponding to target clothing on the three-dimensional model is determined based on the target clothing to be tried on by the target object; a part shielding area of a part matched with the target clothing is determined in the image of the target object; and the wearing area is compared with the part shielding area, and virtual try-on of the target clothing is performed on the target object based on the comparison result, so that the authenticity of the virtual clothing after virtual try-on can be improved.
Description
Technical Field
The present application relates to the field of virtual technologies, and in particular, to a virtual try-on method, a virtual try-on device, an electronic device, and a computer readable storage medium.
Background
With the continuous development of virtual technology, its applications in everyday life are becoming ever wider; through virtual technology, a certain piece of virtual apparel can be worn on an object, realizing a virtual try-on of that apparel. However, in a virtual try-on scene, because the shape or size of the virtual apparel to be tried on may be unusual, the tried-on virtual apparel often does not fit the object, and exposure artifacts can occur; for example, after an object puts on virtual shoes that are too small, the object's toes are left exposed, so the authenticity of the virtual apparel after virtual try-on is low.
Disclosure of Invention
The embodiment of the application provides a virtual try-on method, a virtual try-on device, electronic equipment and a computer readable storage medium, which can improve the authenticity of virtual clothes after virtual try-on.
In a first aspect, an embodiment of the present application provides a virtual fitting method, where the method includes:
acquiring an image of a target object to be subjected to virtual try-on, wherein at least one garment is worn on the target object in the image of the target object;
based on the image of the target object, carrying out grid reconstruction on the target object to obtain a three-dimensional model of the target object;
Determining a wearing area corresponding to the target clothing on the three-dimensional model based on the target clothing to be tried on by the target object;
determining a part shielding area of a part matched with the target clothes in the image of the target object;
and comparing the wearing area with the part shielding area, and performing virtual fitting of the target clothes on the target object based on a comparison result.
In a second aspect, an embodiment of the present application further provides a virtual try-on device, where the device includes:
the image acquisition module is used for acquiring an image of a target object to be subjected to virtual try-on, wherein at least one garment is worn on the target object in the image of the target object;
the grid reconstruction module is used for carrying out grid reconstruction on the target object based on the image of the target object to obtain a three-dimensional model of the target object;
the first area determining module is used for determining a wearing area corresponding to the target clothing on the three-dimensional model based on the target clothing to be tried on by the target object;
a second region determining module, configured to determine, in the image of the target object, a region shielding region of a region matching the target garment;
And the virtual try-on module is used for comparing the wearing area with the part shielding area and carrying out virtual try-on of the target clothes on the target object based on a comparison result.
In a third aspect, embodiments of the present application further provide an electronic device, including a processor and a memory storing a plurality of instructions; the processor loads the instructions from the memory to perform the steps in any of the virtual try-on methods provided by the embodiments of the present application.
In a fourth aspect, embodiments of the present application further provide a computer-readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform steps in any one of the virtual try-on methods provided by the embodiments of the present application.
In the embodiment of the application, by acquiring the image of the target object to be subjected to virtual try-on, at least one garment is worn on the target object in the image of the target object. Then, based on the image of the target object, grid reconstruction is carried out on the target object to obtain a three-dimensional model of the target object, and based on target clothes to be tried on by the target object, a wearing area corresponding to the target clothes on the three-dimensional model is determined, and in the image of the target object, a part shielding area of a part matched with the target clothes is determined. Finally, comparing the wearing area with the part shielding area, and performing virtual fitting of the target clothes on the target object based on a comparison result, so that the wearing area corresponding to the target clothes and the part shielding area of the target object are determined to be compared, virtual fitting is performed based on a comparison result, and the authenticity of the virtual clothes after virtual fitting is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of an embodiment of a virtual try-on method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a target object provided in an embodiment of the present application;
FIG. 3 is a schematic representation of a three-dimensional model provided in an embodiment of the present application;
FIG. 4 is a schematic illustration of a wearing area provided in an embodiment of the present application;
FIG. 5 is a schematic view of a region of site occlusion provided in an embodiment of the present application;
FIG. 6 is a schematic illustration of the difference regions provided in an embodiment of the present application;
FIG. 7a is a schematic view of a background image provided by an embodiment of the present application;
FIG. 7b is a schematic illustration of a repaired image provided by an embodiment of the present application;
fig. 8 is a schematic structural diagram of a virtual fitting device according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
Before explaining the embodiments of the present application in detail, some terms related to the embodiments of the present application are explained.
Wherein in the description of embodiments of the present application, the terms "first," "second," and the like may be used herein to describe various concepts, but such concepts are not limited by these terms unless otherwise specified. These terms are only used to distinguish one concept from another. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiment of the application provides a virtual fitting method, a virtual fitting device, electronic equipment and a computer readable storage medium. Specifically, the virtual fitting method in the embodiment of the present application may be executed by an electronic device, where the electronic device may be a device such as a terminal or a server. The terminal may be a terminal device such as a smart phone, a tablet computer, a notebook computer, a touch screen, a game console, a personal computer (PC, Personal Computer), or a personal digital assistant (PDA, Personal Digital Assistant), and the terminal may further include a client, which may be a game application client, a browser client carrying a game program, an instant messaging client, or the like. The server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (CDN), big data, and artificial intelligence platforms.
For example, the electronic device is described by taking a terminal as an example, and the terminal may acquire an image of a target object to be subjected to virtual try-on, where at least one garment is worn on the target object in the image of the target object; based on the image of the target object, carrying out grid reconstruction on the target object to obtain a three-dimensional model of the target object; determining a wearing area corresponding to the target clothing on the three-dimensional model based on the target clothing to be tried on by the target object; determining a part shielding area of a part matched with the target clothes in the image of the target object; and comparing the wearing area with the part shielding area, and performing virtual fitting of the target clothes on the target object based on a comparison result.
Based on the above-mentioned problems, embodiments of the present application provide a virtual try-on method, apparatus, electronic device, and computer-readable storage medium, which can improve the authenticity of virtual clothes after virtual try-on.
The following detailed description is provided with reference to the accompanying drawings. The following description of the embodiments is not intended to limit the preferred embodiments. Although a logical order is depicted in the flowchart, in some cases the steps shown or described may be performed in an order different than depicted in the figures.
In this embodiment, a terminal is taken as an example for illustration, and this embodiment provides a virtual try-on method, as shown in fig. 1, a specific flow of the virtual try-on method may be as follows:
101. an image of a target object to be subjected to virtual fitting is acquired, and at least one garment is worn on the target object in the image of the target object.
The target object is the object on which virtual try-on is currently to be performed, and the object may be a person, an animal, or the like; it may be set according to requirements and is not limited herein. Correspondingly, the image of the target object may be a picture, that is, virtual try-on is performed directly on a picture containing the target object; the image of the target object may also be a frame in a video, that is, virtual try-on is performed on an image in the video.
In this embodiment, before virtual try-on is performed on an object, the object may already be wearing some clothing. When the clothing worn by the object includes an item of the same type as the apparel to be tried on, that item may be replaced with the try-on apparel; when no item of the same type exists among the clothing worn by the object, the try-on apparel may be newly added onto the object.
The clothing includes, but is not limited to, clothing, shoes, hats, accessories, hair accessories, etc., and may be specifically set according to the needs, and is not limited herein.
102. And carrying out grid reconstruction on the target object based on the image of the target object to obtain a three-dimensional model of the target object.
It can be understood that, since the image of the target object is only a two-dimensional image, in order to better perform virtual fitting, in this embodiment the terminal may perform grid reconstruction on the target object based on the image of the target object to obtain a three-dimensional model of the target object, that is, realize three-dimensional virtual fitting, so that the effect after fitting is more realistic.
Specifically, the terminal may directly perform grid reconstruction on the target object to obtain the three-dimensional model of the target object, or the terminal may send a request to the server, so that the server performs grid reconstruction on the target object in response to the request and feeds the obtained three-dimensional model of the target object back to the terminal. The grid reconstruction of the target object can be performed by neural network technology and the SMPL (Skinned Multi-Person Linear) model.
For example, as shown in fig. 2 and fig. 3, for the target object in fig. 2, the terminal may obtain the three-dimensional model shown in fig. 3 by performing grid reconstruction.
In some embodiments, the reconstructing the mesh of the target object based on the image of the target object to obtain the three-dimensional model of the target object may include: based on the image of the target object, carrying out grid reconstruction on the target object to obtain a plurality of three-dimensional vertex coordinates of the target object, namely, different three-dimensional vertex coordinates form different parts of a human body; and obtaining a three-dimensional model of the target object based on the three-dimensional vertex coordinates.
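As a purely illustrative aid, the following minimal Python sketch (not part of the claimed method) shows the shape of data such a reconstruction step could produce; the vertex count of 6890 is a property of the SMPL model, and the stubbed regressor is an assumption standing in for a trained neural network.

```python
import numpy as np

def reconstruct_mesh(image: np.ndarray) -> np.ndarray:
    """Stand-in for an SMPL-style regressor: a trained network would predict
    pose/shape parameters from the image, and the SMPL model would map them
    to 6890 three-dimensional vertex coordinates."""
    return np.zeros((6890, 3), dtype=np.float32)  # stubbed output

image = np.zeros((512, 512, 3), dtype=np.uint8)  # stand-in target-object image
vertices = reconstruct_mesh(image)               # one (x, y, z) row per vertex
print(vertices.shape)                            # (6890, 3): the model's vertex coordinates
```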
103. And determining a wearing area corresponding to the target clothing on the three-dimensional model based on the target clothing to be tried on by the target object.
The target clothes are clothes which are needed to be tried on currently by a target object.
It can be appreciated that the target garment may be applied to only one part of the target object, i.e., a local area of the target object; a chain ornament, for example, may be worn on the wrist or ankle of the target object. Therefore, in this embodiment, when facing such local try-on, the terminal needs to determine, based on the apparel that the target object currently needs to try on, where on the target object the target garment is to be applied, that is, which area of the three-dimensional model it is to cover.
The wearing area may be an area covered by the target garment on the target object, for example, the shoe may cover a foot area of the target object, and the bracelet may cover a wrist area of the target object.
In some embodiments, because different target apparel covers different areas on the target object, the determining, based on the target apparel to be tried on by the target object, a wearing area corresponding to the target apparel on the three-dimensional model may include: acquiring attribute information of the target apparel; and dividing regions on the three-dimensional model based on the attribute information of the target apparel, and determining the divided region corresponding to the target apparel as the wearing area.
Wherein, the attribute information of the target clothes is related information for indicating the position where the target clothes is to be covered on the target object, for example, the attribute information of the target clothes can include, but is not limited to, size information for indicating the size of the target clothes and associated position information for indicating the position where the target clothes is to be applied, such as the foot of the target object.
In some embodiments, the attribute information includes size information and associated position information, and the grid reconstruction yields a plurality of three-dimensional vertex coordinates of the target object from which the three-dimensional model is obtained. In this case, dividing regions on the three-dimensional model based on the attribute information of the target garment and determining the divided region corresponding to the target garment as the wearing area may include: the terminal determines the target three-dimensional vertex coordinates corresponding to the target garment based on the size information and the associated position information of the target garment, that is, the three-dimensional vertex coordinates to be covered by the target garment are determined as the target three-dimensional vertex coordinates; then, the area composed of the target three-dimensional vertex coordinates is determined as the wearing area.
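For illustration only, a minimal sketch of this selection step follows; the per-vertex part labels and the part id are hypothetical, since the actual labeling depends on the reconstruction model used.

```python
import numpy as np

RIGHT_FOOT = 7  # hypothetical part-label id for the garment's associated position

rng = np.random.default_rng(0)
vertices = rng.random((6890, 3)).astype(np.float32)  # reconstructed mesh vertices
vertex_part = rng.integers(0, 24, size=6890)         # assumed per-vertex part labels

def wearing_region(vertices, vertex_part, associated_part):
    """Select the target three-dimensional vertex coordinates: the vertices
    whose part label matches the garment's associated position information."""
    mask = vertex_part == associated_part
    return vertices[mask]  # size information could further grow or shrink this set

region = wearing_region(vertices, vertex_part, RIGHT_FOOT)  # the wearing area
```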
In some embodiments, since there may be a plurality of target apparel items, in order to better distinguish different wearing areas on the three-dimensional model, after the region division on the three-dimensional model, the method may further include: assigning different colors to the divided regions so that different color regions are displayed on the three-dimensional model, each color region representing a different target apparel item; for example, the color region corresponding to the left foot is red, and the color region corresponding to the right foot is green.
For example, for the three-dimensional model in fig. 3, the terminal may divide different areas in the three-dimensional model and assign the divided different areas to different colors, as shown in fig. 4, resulting in the display of the different color areas on the three-dimensional model in fig. 4.
Specifically, the terminal may directly determine the wearing area corresponding to the target garment on the three-dimensional model, or the terminal may send a request to the server, so that the server determines the wearing area corresponding to the target garment on the three-dimensional model in response to the request, and feeds back the obtained wearing area corresponding to the target garment to the terminal. The terminal can adopt a pyrender library or other server rendering tools to realize the determination of the wearing area.
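The color assignment itself can be expressed compactly; the sketch below, with an assumed part-id-to-color mapping, illustrates one way the divided regions could be given distinct colors before rendering.

```python
import numpy as np

PART_COLORS = {6: (255, 0, 0), 7: (0, 255, 0)}  # assumed ids: left foot red, right foot green

rng = np.random.default_rng(0)
vertex_part = rng.integers(0, 24, size=6890)         # per-vertex part labels
vertex_colors = np.zeros((6890, 3), dtype=np.uint8)  # default: black / unassigned

for part_id, rgb in PART_COLORS.items():
    vertex_colors[vertex_part == part_id] = rgb      # color each divided region

# A renderer such as pyrender can then rasterize the colored mesh, so each
# wearing area appears as a solid-color region in image space.
```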
104. And determining a part shielding area of a part matched with the target clothes in the image of the target object.
It can be understood that, before the terminal performs virtual try-on on an object, the object may already be wearing some clothing, and the shapes and sizes of the parts of different objects differ. Therefore, in this embodiment, in order to better improve the realism of virtual try-on, the terminal needs to perform recognition analysis on the target object, that is, analyze the relevant information corresponding to each part of the target object based on the image of the target object, so as to determine the area corresponding to each part, i.e., each local area, of the target object. By analyzing each part of the target object in this way, the exposure problem that local try-on may cause is solved simply and efficiently, thereby improving user experience.
The above-mentioned exposure problem refers to a phenomenon in which some positions of the target object are left exposed, for example, toes poking out of a shoe, which occurs when the shoe shape does not match the skeleton of the target object.
The above-mentioned part shielding area is the area that the corresponding part can occlude; for example, the part shielding area corresponding to the foot of the target object needs to include the area of the shoes currently worn by the target object.
For example, for the target object in fig. 2, the terminal may determine the areas corresponding to the different parts of the target object by performing recognition analysis on the target object, as shown in fig. 5, where the different colors displayed in fig. 5 correspond to the areas corresponding to the different parts of the target object.
In some embodiments, because the target object in the image may also be wearing some clothing when the target object is analyzed, in order to better improve the realism of the virtual try-on, determining, in the image of the target object, the part shielding area of the part matching the target apparel may include: the terminal divides each part of the target object in the image of the target object and determines the part matched with the target apparel; then, the part matched with the target apparel in the image of the target object is analyzed to obtain part analysis information, the part analysis information including at least one of part size information and size information of the clothing worn on the part; finally, the part shielding area of the part matched with the target apparel is determined based on the part analysis information.
Specifically, the terminal may directly determine the part shielding area of the part matching the target garment, or the terminal may send a request to the server, so that the server determines the part shielding area in response to the request and feeds it back to the terminal. The terminal can determine the part shielding area by neural network technology; for example, the terminal can use a neural network to obtain the part analysis information, such as information on the coat, trousers, left shoe, right shoe, head, left leg, right leg, left arm, right arm, and so on.
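As a sketch only: if the part analysis is produced by a human-parsing network that outputs a per-pixel class-label map, the part shielding area can be read off as the union of the labels of the matched part and of the clothing worn on it. The label ids below are assumptions, not values from this application.

```python
import numpy as np

RIGHT_SHOE, RIGHT_LEG = 10, 13  # hypothetical class ids of a parsing network

rng = np.random.default_rng(0)
parsing = rng.integers(0, 20, size=(512, 512))  # stand-in per-pixel parsing result

def part_occlusion_mask(parsing, label_ids):
    """Union of the pixels of the matched part and of the clothing currently
    worn on it; together they form the part shielding area."""
    return np.isin(parsing, label_ids)

occlusion = part_occlusion_mask(parsing, [RIGHT_SHOE, RIGHT_LEG])
```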
105. And comparing the wearing area with the part shielding area, and performing virtual fitting of the target clothes on the target object based on a comparison result.
In this embodiment, the terminal obtains a comparison result by comparing the wearing area with the above-mentioned part shielding area, where the comparison result indicates how much of the matched part the target garment can shield. If the wearing area is smaller than the part shielding area, the target garment cannot completely shield the part of the target object; if the wearing area is greater than or equal to the part shielding area, the target garment can completely shield the part. Virtual try-on of the target garment is then performed on the target object based on the comparison result, thereby solving the exposure problem that arises when virtual try-on is performed on a certain local or global area.
In some embodiments, after the region division is performed on the three-dimensional model, the terminal may assign different colors to the divided different regions so as to display the different color regions on the three-dimensional model, so that the comparing the wearing region with the region shielding region may be performed by comparing the color regions, so as to increase the comparison speed, for example, comparing the color region corresponding to the wearing region with the region shielding region.
In some embodiments, the performing the virtual try-on of the target apparel on the target object based on the comparison result may include: if the comparison result shows that the wearing area is smaller than the part shielding area, that is, the target apparel cannot completely shield the corresponding part of the target object, repairing the difference region between the wearing area and the part shielding area in the image of the target object to obtain a repaired image; and then acquiring the image of the target apparel and fusing it with the repaired image to realize the virtual try-on for the target object.
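Assuming both areas have been rendered into the image plane as boolean masks (an assumption for illustration), the comparison and the difference region reduce to element-wise mask operations, as in this sketch:

```python
import numpy as np

wear_mask = np.zeros((512, 512), dtype=bool)       # wearing area in image space
occlusion_mask = np.zeros((512, 512), dtype=bool)  # part shielding area in image space

# Pixels covered by the part (and its current clothing) but not by the target
# garment: after try-on these would be exposed and must be repaired.
diff_mask = occlusion_mask & ~wear_mask

needs_repair = diff_mask.any()  # true when the wearing area is smaller
```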
In this embodiment, the terminal repairs the exposed portion in a targeted manner through image repair, thereby improving the realism of local virtual try-on and the user experience. The terminal may perform the repair directly, or the terminal may send a request to the server so that the server performs the repair in response to the request and feeds the repaired image back to the terminal.
For example, for the wearing area of the right foot of the target object in fig. 4 and the part shielding area of the right foot of the target object in fig. 5, it can be seen that the part shielding area extends clearly beyond the wearing area, and the difference region in fig. 6, that is, the black region in fig. 6, namely the ankle area of the right foot, can be obtained by comparison.
In some embodiments, the repairing the difference region between the wearing area and the part shielding area in the image of the target object to obtain a repaired image may include: generating, based on the background area in the image of the target object, a background image in which the target object is absent, as shown in fig. 7a, and determining the partial image of the difference region in that background image; then, replacing the image corresponding to the difference region in the image of the target object with the partial image, so as to obtain the repaired image, that is, the image obtained after performing virtual try-on of the target apparel on the target object, as shown in fig. 7b.
In this embodiment, by directly comparing the wearing area with the part shielding area to obtain the difference region, the terminal determines where the target object would be exposed, that is, where the target garment does not match the target object, and updates the unmatched place to the background, so that the image of the target object after virtual try-on is not affected.
Specifically, let the difference-region mask be m_2, the background image be inp_1, and the image of the target object (the image before virtual try-on) be img; then the repaired image new_img, that is, the image obtained by performing virtual try-on of the target apparel on the target object, is computed by the following formula:
new_img = m_2 * inp_1 + (1.0 - m_2) * img
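This blend is a standard per-pixel alpha composite; a minimal numpy rendering of the formula, with placeholder arrays, could look as follows:

```python
import numpy as np

h, w = 512, 512
img = np.zeros((h, w, 3), dtype=np.float32)    # image of the target object (pre-try-on)
inp_1 = np.zeros((h, w, 3), dtype=np.float32)  # inpainted background without the object
m_2 = np.zeros((h, w, 1), dtype=np.float32)    # difference-region mask, values in [0, 1]

# The background fills the exposed difference region; elsewhere the original
# image is kept unchanged.
new_img = m_2 * inp_1 + (1.0 - m_2) * img
```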
from the above, it can be seen that by acquiring an image of a target object to be virtually tried on, at least one garment is worn on the target object in the image of the target object. Then, based on the image of the target object, grid reconstruction is carried out on the target object to obtain a three-dimensional model of the target object, and based on target clothes to be tried on by the target object, a wearing area corresponding to the target clothes on the three-dimensional model is determined, and in the image of the target object, a part shielding area of a part matched with the target clothes is determined. Finally, comparing the wearing area with the part shielding area, and performing virtual fitting of the target clothes on the target object based on a comparison result, so that the wearing area corresponding to the target clothes and the part shielding area of the target object are determined to be compared, virtual fitting is performed based on a comparison result, and the authenticity of the virtual clothes after virtual fitting is improved.
In order to better implement the above method, the embodiment of the present application further provides a virtual fitting device, where the virtual fitting device may be specifically integrated in an electronic device, for example, a computer device, where the computer device may be a terminal, a server, or other devices.
The terminal can be a mobile phone, a tablet personal computer, an intelligent Bluetooth device, a notebook computer, a personal computer and other devices; the server may be a single server or a server cluster composed of a plurality of servers.
For example, in this embodiment, a specific integration of a virtual fitting device in a terminal will be taken as an example, and a method in this embodiment of the present application is described in detail, where this embodiment provides a virtual fitting device, as shown in fig. 8, where the virtual fitting device may include:
an image obtaining module 801, configured to obtain an image of a target object to be virtually tried on, where the target object is worn with at least one garment;
a mesh reconstruction module 802, configured to perform mesh reconstruction on the target object based on the image of the target object, so as to obtain a three-dimensional model of the target object;
a first region determining module 803, configured to determine, based on a target garment to be tried on by the target object, a wearing region corresponding to the target garment on the three-dimensional model;
A second region determining module 804, configured to determine, in the image of the target object, a region occlusion region of a region matching the target garment;
and a virtual fitting module 805 configured to compare the wearing area and the part shielding area, and perform virtual fitting of the target apparel on the target object based on a comparison result.
In some embodiments, the first area determining module 803 is specifically configured to:
acquiring attribute information of the target clothes;
and dividing regions on the three-dimensional model based on the attribute information of the target clothing, and determining the region corresponding to the divided target clothing as the wearing region.
In some embodiments, the attribute information includes size information and associated location information, and the mesh reconstruction module 802 is specifically configured to:
based on the image of the target object, carrying out grid reconstruction on the target object to obtain a plurality of three-dimensional vertex coordinates of the target object;
obtaining a three-dimensional model of the target object based on a plurality of the three-dimensional vertex coordinates;
the first area determining module 803 specifically is configured to:
determining a target three-dimensional vertex coordinate corresponding to the target clothing based on the size information and the associated position information of the target clothing;
And determining the area formed by the three-dimensional vertex coordinates of the target as the wearing area.
In some embodiments, the virtual fitting device further includes a color imparting module, where the color imparting module is specifically configured to:
different colors are given to the divided different areas so as to display the different color areas on the three-dimensional model;
the virtual try-on module 805 is specifically configured to:
and comparing the color area corresponding to the wearing area with the part shielding area.
In some embodiments, the second area determining module 804 is specifically configured to:
dividing each part of the target object in the image of the target object, and determining a part matched with the target clothes;
analyzing a part matched with the target clothes in the image of the target object to obtain part analysis information, wherein the part analysis information comprises at least one of part size information and size information of the part wearing clothes;
and determining a part shielding area of a part matched with the target clothes based on the part analysis information.
In some embodiments, the virtual try-on module 805 is specifically configured to:
If the comparison result is that the wearing area is smaller than the part shielding area, repairing a difference area between the wearing area and the part shielding area in the image of the target object to obtain a repaired image;
and acquiring the image of the target clothes, and fusing the image of the target clothes with the repaired image to realize virtual try-on of the target object.
In some embodiments, the virtual try-on module 805 is specifically configured to:
generating a background image in which the target object does not exist based on a background area in the image of the target object, and determining a local image in the background image of the difference area;
and replacing the image corresponding to the difference region in the image of the target object with the local image to obtain a repaired image.
As described above, the image acquisition module 801 acquires an image of a target object to be virtually tried on, and at least one garment is worn on the target object in the image of the target object. Then, the mesh reconstruction module 802 performs mesh reconstruction on the target object based on the image of the target object to obtain a three-dimensional model of the target object, the first region determining module 803 determines a wearing region corresponding to the target garment on the three-dimensional model based on the target garment to be tried on by the target object, and the second region determining module 804 determines a region shielding region of a region matching the target garment in the image of the target object. Finally, the wearing area and the part shielding area are compared through the virtual fitting module 805, and based on a comparison result, the virtual fitting of the target clothing is performed on the target object, so that the wearing area corresponding to the target clothing and the part shielding area of the target object are determined to be compared, virtual fitting is performed based on a comparison result, and the authenticity of the virtual clothing after virtual fitting is improved.
Correspondingly, the embodiment of the application also provides electronic equipment, which can be a terminal, such as a smart phone, a tablet computer, a notebook computer, a touch screen, a game console, a personal computer (PC, Personal Computer), or a personal digital assistant (PDA, Personal Digital Assistant). As shown in fig. 9, fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 900 includes a processor 901 having one or more processing cores, a memory 902 having one or more computer readable storage media, and a computer program stored on the memory 902 and executable on the processor. The processor 901 is electrically connected to the memory 902. It will be appreciated by those skilled in the art that the electronic device structure shown in the figure is not limiting of the electronic device and may include more or fewer components than shown, combine certain components, or use a different arrangement of components.
Processor 901 is a control center of electronic device 900, connects various portions of the entire electronic device 900 using various interfaces and lines, and performs various functions of electronic device 900 and processes data by running or loading software programs and/or modules stored in memory 902, and invoking data stored in memory 902, thereby performing overall monitoring of electronic device 900.
In the embodiment of the present application, the processor 901 in the electronic device 900 loads the instructions corresponding to the processes of one or more application programs into the memory 902 according to the following steps, and the processor 901 executes the application programs stored in the memory 902, so as to implement various functions:
acquiring an image of a target object to be subjected to virtual try-on, wherein at least one garment is worn on the target object in the image of the target object;
based on the image of the target object, carrying out grid reconstruction on the target object to obtain a three-dimensional model of the target object;
determining a wearing area corresponding to the target clothing on the three-dimensional model based on the target clothing to be tried on by the target object;
determining a part shielding area of a part matched with the target clothes in the image of the target object;
and comparing the wearing area with the part shielding area, and performing virtual fitting of the target clothes on the target object based on a comparison result.
In some embodiments, the determining, based on the target apparel to be tried on by the target object, a wearing area corresponding to the target apparel on the three-dimensional model includes:
Acquiring attribute information of the target clothes;
and dividing regions on the three-dimensional model based on the attribute information of the target clothing, and determining the region corresponding to the divided target clothing as the wearing region.
In some embodiments, the attribute information includes size information and associated location information, and the reconstructing a mesh of the target object based on the image of the target object to obtain a three-dimensional model of the target object includes:
based on the image of the target object, carrying out grid reconstruction on the target object to obtain a plurality of three-dimensional vertex coordinates of the target object;
obtaining a three-dimensional model of the target object based on a plurality of the three-dimensional vertex coordinates;
the method for dividing the region on the three-dimensional model based on the attribute information of the target clothing, and determining the region corresponding to the divided target clothing as the wearing region includes:
determining a target three-dimensional vertex coordinate corresponding to the target clothing based on the size information and the associated position information of the target clothing;
and determining the area formed by the three-dimensional vertex coordinates of the target as the wearing area.
In some embodiments, after the region division on the three-dimensional model, the method further includes:
different colors are given to the divided different areas so as to display the different color areas on the three-dimensional model;
the comparing the wearing area with the part shielding area includes:
and comparing the color area corresponding to the wearing area with the part shielding area.
In some embodiments, determining a location occlusion region of a location matching the target apparel in the image of the target object includes:
dividing each part of the target object in the image of the target object, and determining a part matched with the target clothes;
analyzing a part matched with the target clothes in the image of the target object to obtain part analysis information, wherein the part analysis information comprises at least one of part size information and size information of the part wearing clothes;
and determining a part shielding area of a part matched with the target clothes based on the part analysis information.
In some embodiments, the performing the virtual fitting of the target apparel on the target object based on the comparison result includes:
If the comparison result is that the wearing area is smaller than the part shielding area, repairing a difference area between the wearing area and the part shielding area in the image of the target object to obtain a repaired image;
and acquiring the image of the target clothes, and fusing the image of the target clothes with the repaired image to realize virtual try-on of the target object.
In some embodiments, the repairing the difference region between the wearing region and the region shielding region in the image of the target object to obtain a repaired image includes:
generating a background image in which the target object does not exist based on a background area in the image of the target object, and determining a local image in the background image of the difference area;
and replacing the image corresponding to the difference region in the image of the target object with the local image to obtain a repaired image.
Thus, the electronic device 900 provided in this embodiment may have the following technical effects: the reality of virtual clothes after virtual try-on is improved.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Optionally, as shown in fig. 9, the electronic device 900 further includes: a touch display 903, a radio frequency circuit 904, an audio circuit 905, an input unit 906, and a power supply 907. The processor 901 is electrically connected to the touch display 903, the radio frequency circuit 904, the audio circuit 905, the input unit 906, and the power supply 907, respectively. It will be appreciated by those skilled in the art that the electronic device structure shown in fig. 9 is not limiting of the electronic device and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
The touch display 903 may be used to display a graphical user interface and receive operation instructions generated by a user acting on the graphical user interface. The touch display 903 may include a display panel and a touch panel. The display panel may be used to display information entered by or provided to the user as well as various graphical user interfaces of the electronic device, which may be composed of graphics, text, icons, video, and any combination thereof. Alternatively, the display panel may be configured in the form of a liquid crystal display (LCD, Liquid Crystal Display), an organic light-emitting diode (OLED) display, or the like. The touch panel may be used to collect touch operations on or near it (such as operations by the user using a finger, stylus, or any other suitable object or accessory on or near the touch panel) and generate corresponding operation instructions, which execute corresponding programs. Alternatively, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 901, and can receive and execute commands sent from the processor 901. The touch panel may overlay the display panel; when the touch panel detects a touch operation on or near it, the operation is passed to the processor 901 to determine the type of touch event, and the processor 901 then provides a corresponding visual output on the display panel based on the type of touch event. In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display 903 to implement input and output functions. In some embodiments, however, the touch panel and the display panel may be implemented as two separate components to perform the input and output functions; that is, the touch display 903 may also implement an input function as part of the input unit 906.
The radio frequency circuit 904 may be configured to receive and transmit radio frequency signals to and from a network device or other electronic device via wireless communication to and from the network device or other electronic device.
The audio circuitry 905 may be used to provide an audio interface between a user and an electronic device through a speaker, microphone. The audio circuit 905 may transmit the received electrical signal converted from audio data to a speaker, and convert the electrical signal into a sound signal to output; on the other hand, the microphone converts the collected sound signals into electrical signals, which are received by the audio circuit 905 and converted into audio data, which are processed by the audio data output processor 901 for transmission to, for example, another electronic device via the radio frequency circuit 904, or which are output to the memory 902 for further processing. The audio circuit 905 may also include an ear bud jack to provide communication of the peripheral headphones with the electronic device.
The input unit 906 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 907 is used to power the various components of the electronic device 900. Alternatively, the power supply 907 may be logically connected to the processor 901 through a power management system, so as to implement functions of managing charging, discharging, and power consumption management through the power management system. The power supply 907 may also include one or more of any components, such as a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown in fig. 9, the electronic device 900 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which are not described herein.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware, which may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application provide a computer readable storage medium having stored therein a plurality of computer programs that can be loaded by a processor to perform steps in any of the virtual try-on methods provided by embodiments of the present application. For example, the computer program may perform the steps of:
Acquiring an image of a target object to be subjected to virtual try-on, wherein at least one garment is worn on the target object in the image of the target object;
based on the image of the target object, carrying out grid reconstruction on the target object to obtain a three-dimensional model of the target object;
determining a wearing area corresponding to the target clothing on the three-dimensional model based on the target clothing to be tried on by the target object;
determining a part shielding area of a part matched with the target clothes in the image of the target object;
and comparing the wearing area with the part shielding area, and performing virtual fitting of the target clothes on the target object based on a comparison result.
In some embodiments, the determining, based on the target apparel to be tried on by the target object, a wearing area corresponding to the target apparel on the three-dimensional model includes:
acquiring attribute information of the target clothes;
and dividing regions on the three-dimensional model based on the attribute information of the target clothing, and determining the region corresponding to the divided target clothing as the wearing region.
In some embodiments, the attribute information includes size information and associated location information, and the reconstructing a mesh of the target object based on the image of the target object to obtain a three-dimensional model of the target object includes:
Based on the image of the target object, carrying out grid reconstruction on the target object to obtain a plurality of three-dimensional vertex coordinates of the target object;
obtaining a three-dimensional model of the target object based on a plurality of the three-dimensional vertex coordinates;
the method for dividing the region on the three-dimensional model based on the attribute information of the target clothing, and determining the region corresponding to the divided target clothing as the wearing region includes:
determining a target three-dimensional vertex coordinate corresponding to the target clothing based on the size information and the associated position information of the target clothing;
and determining the area formed by the three-dimensional vertex coordinates of the target as the wearing area.
In some embodiments, after the region division on the three-dimensional model, the method further includes:
different colors are given to the divided different areas so as to display the different color areas on the three-dimensional model;
the comparing the wearing area with the part shielding area includes:
and comparing the color area corresponding to the wearing area with the part shielding area.
In some embodiments, determining a location occlusion region of a location matching the target apparel in the image of the target object includes:
Dividing each part of the target object in the image of the target object, and determining a part matched with the target clothes;
analyzing a part matched with the target clothes in the image of the target object to obtain part analysis information, wherein the part analysis information comprises at least one of part size information and size information of the part wearing clothes;
and determining a part shielding area of a part matched with the target clothes based on the part analysis information.
In some embodiments, the performing the virtual fitting of the target clothing on the target object based on the comparison result includes:
if the comparison result indicates that the wearing area is smaller than the part occlusion area, repairing the difference region between the wearing area and the part occlusion area in the image of the target object to obtain a repaired image;
and acquiring an image of the target clothing, and fusing the image of the target clothing with the repaired image to realize virtual fitting on the target object.
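The fusion step can be illustrated with plain alpha compositing over toy arrays; the garment mask stands in for a clothing image already warped to the body (all data here is an illustrative assumption):

```python
import numpy as np

# Hypothetical inputs: repaired image, garment image warped to the body,
# and a mask marking where the garment covers the image.
repaired = np.full((8, 8, 3), 200, dtype=np.uint8)  # repaired image
garment = np.zeros((8, 8, 3), dtype=np.uint8)
garment[2:5, 2:6] = (30, 60, 120)                   # target clothing pixels
mask = np.zeros((8, 8, 1), dtype=np.float32)
mask[2:5, 2:6] = 1.0                                # garment coverage

# Alpha compositing: garment pixels replace repaired pixels under the mask.
fused = (mask * garment + (1.0 - mask) * repaired).astype(np.uint8)
print(fused[3, 3], fused[0, 0])
```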
In some embodiments, the repairing the difference region between the wearing area and the part occlusion area in the image of the target object to obtain a repaired image includes:
generating a background image without the target object based on the background area in the image of the target object, and determining a local image of the difference region in the background image;
and replacing the image corresponding to the difference region in the image of the target object with the local image to obtain the repaired image.
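One way to realize this repair is classical inpainting; the sketch below uses OpenCV's `cv2.inpaint` on toy data (the patent does not prescribe a particular background-generation algorithm, so this choice and the masks are assumptions):

```python
import cv2
import numpy as np

# Hypothetical image of the target object and a mask of the person pixels.
image = np.full((64, 64, 3), 180, dtype=np.uint8)
person_mask = np.zeros((64, 64), dtype=np.uint8)
person_mask[10:54, 20:44] = 255  # pixels belonging to the target object

# Generate a background image without the target object by inpainting
# the person region from the surrounding background area.
background = cv2.inpaint(image, person_mask, 3, cv2.INPAINT_TELEA)

# Difference region between the wearing area and the part occlusion area.
difference_mask = np.zeros((64, 64), dtype=bool)
difference_mask[40:54, 20:44] = True  # e.g. legs not covered by shorts

# Replace the difference region with the local background image.
repaired = image.copy()
repaired[difference_mask] = background[difference_mask]
```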
It can be seen that the computer program can be loaded by a processor to perform the steps in any of the virtual fitting methods provided in the embodiments of the present application, thereby achieving the following technical effect: the realism of the target clothing after virtual fitting is improved.
For the specific implementation of each of the above operations, reference may be made to the previous embodiments, which are not repeated here.
The computer-readable storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.
Because the computer program stored in the computer-readable storage medium can execute the steps in any of the virtual fitting methods provided in the embodiments of the present application, it can achieve the beneficial effects of any of those methods, which are detailed in the previous embodiments and are not repeated here.
The foregoing describes in detail the virtual fitting method, apparatus, electronic device, and computer-readable storage medium provided in the embodiments of the present application. Specific examples are used herein to illustrate the principles and implementations of the present application, and the above description of the embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, those skilled in the art may make changes to the specific implementations and the scope of application according to the ideas of the present application. In summary, the contents of this specification should not be construed as limiting the present application.
Claims (10)
1. A virtual fitting method, the method comprising:
acquiring an image of a target object to be subjected to virtual fitting, wherein the target object wears at least one garment in the image of the target object;
performing mesh reconstruction on the target object based on the image of the target object to obtain a three-dimensional model of the target object;
determining, based on target clothing to be tried on by the target object, a wearing area corresponding to the target clothing on the three-dimensional model;
determining, in the image of the target object, a part occlusion area of a part matching the target clothing;
and comparing the wearing area with the part occlusion area, and performing virtual fitting of the target clothing on the target object based on a comparison result.
2. The virtual fitting method according to claim 1, wherein the determining, based on the target clothing to be tried on by the target object, a wearing area corresponding to the target clothing on the three-dimensional model comprises:
acquiring attribute information of the target clothing;
and performing region division on the three-dimensional model based on the attribute information of the target clothing, and determining the divided region corresponding to the target clothing as the wearing area.
3. The virtual fitting method according to claim 2, wherein the attribute information comprises size information and associated position information, and the performing mesh reconstruction on the target object based on the image of the target object to obtain the three-dimensional model of the target object comprises:
performing mesh reconstruction on the target object based on the image of the target object to obtain a plurality of three-dimensional vertex coordinates of the target object;
obtaining the three-dimensional model of the target object based on the plurality of three-dimensional vertex coordinates;
and the performing region division on the three-dimensional model based on the attribute information of the target clothing and determining the divided region corresponding to the target clothing as the wearing area comprises:
determining target three-dimensional vertex coordinates corresponding to the target clothing based on the size information and the associated position information of the target clothing;
and determining the region formed by the target three-dimensional vertex coordinates as the wearing area.
4. The virtual fitting method according to claim 2, further comprising, after the region division on the three-dimensional model:
assigning different colors to the divided regions so that the different color regions are displayed on the three-dimensional model;
wherein the comparing the wearing area with the part occlusion area comprises:
comparing the color region corresponding to the wearing area with the part occlusion area.
5. The virtual fitting method according to claim 1, wherein the determining, in the image of the target object, a part occlusion area of a part matching the target clothing comprises:
segmenting the parts of the target object in the image of the target object, and determining the part matching the target clothing;
parsing the part matching the target clothing in the image of the target object to obtain part parsing information, wherein the part parsing information comprises at least one of size information of the part and size information of the clothing worn on the part;
and determining the part occlusion area of the part matching the target clothing based on the part parsing information.
6. The virtual fitting method according to any one of claims 1 to 5, wherein the performing the virtual fitting of the target clothing on the target object based on the comparison result comprises:
if the comparison result indicates that the wearing area is smaller than the part occlusion area, repairing a difference region between the wearing area and the part occlusion area in the image of the target object to obtain a repaired image;
and acquiring an image of the target clothing, and fusing the image of the target clothing with the repaired image to realize virtual fitting on the target object.
7. The virtual fitting method according to claim 6, wherein the repairing the difference region between the wearing area and the part occlusion area in the image of the target object to obtain the repaired image comprises:
generating a background image without the target object based on a background area in the image of the target object, and determining a local image of the difference region in the background image;
and replacing the image corresponding to the difference region in the image of the target object with the local image to obtain the repaired image.
8. A virtual fitting device, the device comprising:
an image acquisition module, configured to acquire an image of a target object to be subjected to virtual fitting, wherein the target object wears at least one garment in the image of the target object;
a mesh reconstruction module, configured to perform mesh reconstruction on the target object based on the image of the target object to obtain a three-dimensional model of the target object;
a first area determining module, configured to determine, based on target clothing to be tried on by the target object, a wearing area corresponding to the target clothing on the three-dimensional model;
a second area determining module, configured to determine, in the image of the target object, a part occlusion area of a part matching the target clothing;
and a virtual fitting module, configured to compare the wearing area with the part occlusion area and perform virtual fitting of the target clothing on the target object based on a comparison result.
9. An electronic device, comprising a processor and a memory, the memory storing a plurality of instructions; the processor loads the instructions from the memory to perform the steps in the virtual fitting method of any one of claims 1 to 7.
10. A computer-readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps in the virtual fitting method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311510825.3A CN117523142A (en) | 2023-11-13 | 2023-11-13 | Virtual fitting method, virtual fitting device, electronic equipment and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117523142A true CN117523142A (en) | 2024-02-06 |
Family
ID=89762062
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311510825.3A Pending CN117523142A (en) | 2023-11-13 | 2023-11-13 | Virtual fitting method, virtual fitting device, electronic equipment and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117523142A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114004669A (en) * | 2021-10-08 | 2022-02-01 | 深圳Tcl新技术有限公司 | Data processing method, device and computer readable storage medium |
CN116452601A (en) * | 2022-01-07 | 2023-07-18 | 青岛海尔科技有限公司 | Virtual fitting method, virtual fitting device, electronic equipment and storage medium |
CN114758109A (en) * | 2022-05-20 | 2022-07-15 | 深圳市镭神智能系统有限公司 | Virtual fitting method and system, and method for providing virtual fitting information |
CN115358828A (en) * | 2022-10-14 | 2022-11-18 | 阿里巴巴(中国)有限公司 | Information processing and interaction method, device, equipment and medium based on virtual fitting |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109427083B (en) | Method, device, terminal and storage medium for displaying three-dimensional virtual image | |
CN108492363B (en) | Augmented reality-based combination method and device, storage medium and electronic equipment | |
CN113052947B (en) | Rendering method, rendering device, electronic equipment and storage medium | |
CN112233211B (en) | Animation production method, device, storage medium and computer equipment | |
CN112991494B (en) | Image generation method, device, computer equipment and computer readable storage medium | |
CN113546411B (en) | Game model rendering method, device, terminal and storage medium | |
CN112465945B (en) | Model generation method and device, storage medium and computer equipment | |
CN113538696A (en) | Special effect generation method and device, storage medium and electronic equipment | |
CN114742925A (en) | Covering method and device for virtual object, electronic equipment and storage medium | |
CN116797631A (en) | Differential area positioning method, differential area positioning device, computer equipment and storage medium | |
CN113487662B (en) | Picture display method and device, electronic equipment and storage medium | |
CN110622218A (en) | Image display method, device, storage medium and terminal | |
CN113539439A (en) | Medical image processing method and device, computer equipment and storage medium | |
CN117455753A (en) | Special effect template generation method, special effect generation device and storage medium | |
CN117523142A (en) | Virtual fitting method, virtual fitting device, electronic equipment and computer readable storage medium | |
CN113362435B (en) | Virtual component change method, device, equipment and medium of virtual object model | |
CN114742970A (en) | Processing method of virtual three-dimensional model, nonvolatile storage medium and electronic device | |
CN113554760B (en) | Method and device for changing package, computer equipment and storage medium | |
CN113350792A (en) | Contour processing method and device for virtual model, computer equipment and storage medium | |
CN108876498B (en) | Information display method and device | |
CN117523136B (en) | Face point position corresponding relation processing method, face reconstruction method, device and medium | |
CN114004922B (en) | Bone animation display method, device, equipment, medium and computer program product | |
CN117830081A (en) | Dressing generation method, device and equipment for virtual object and readable storage medium | |
CN115731339A (en) | Virtual model rendering method and device, computer equipment and storage medium | |
CN117689780A (en) | Animation generation method and device of virtual model, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||