CN117710581A - Virtual human clothing generation method, device, equipment and medium - Google Patents

Virtual human clothing generation method, device, equipment and medium

Info

Publication number
CN117710581A
CN117710581A (Application No. CN202311734905.7A)
Authority
CN
China
Prior art keywords
clothing
model
virtual
garment
virtual person
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311734905.7A
Other languages
Chinese (zh)
Inventor
陈亚军 (Chen Yajun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, MIGU Culture Technology Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202311734905.7A priority Critical patent/CN117710581A/en
Publication of CN117710581A publication Critical patent/CN117710581A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The application provides a virtual person clothing generation method, device, equipment and medium, relating to the technical field of artificial intelligence. The method comprises the following steps: receiving target information input by a user, wherein the target information carries clothing feature information representing clothing features of a virtual person; inputting the clothing feature information corresponding to the target information into a pre-trained artificial intelligence (AI) model, and obtaining a virtual person clothing model based on the output of the AI model; and rendering the virtual person clothing model to obtain the virtual person clothing. The method and device of the present application can improve the virtual person clothing generation effect.

Description

Virtual human clothing generation method, device, equipment and medium
Technical Field
The application relates to the technical field of artificial intelligence, in particular to a virtual person clothing generation method, device, equipment and medium.
Background
In the related art, an outfit change for a virtual person is realized by matching based on the user's input: for example, according to information typed by the user or information about the user's surrounding environment, an existing virtual person garment is matched from a material library and then fitted onto the virtual person. The user can only select and wear garments that the service provider has produced in advance with a 3D asset production tool, which are limited in style and variety, so the virtual person clothing generation effect is relatively poor.
Disclosure of Invention
The embodiment of the application provides a virtual person clothing generation method, device, equipment and medium, so as to solve the problem that the existing virtual person clothing generation effect is poor.
In order to solve the technical problems, the application is realized in the following way:
in a first aspect, an embodiment of the present application provides a method for generating a virtual person garment, the method including:
receiving target information input by a user, wherein the target information carries clothing feature information for representing clothing features of a virtual person;
inputting clothing feature information corresponding to the target information into an artificial intelligent AI model obtained through training in advance, and obtaining a virtual person clothing model based on the output of the AI model;
and rendering the virtual person clothing model to obtain the virtual person clothing.
In a second aspect, embodiments of the present application provide a virtual person garment generating apparatus, the apparatus including:
the receiving module is used for receiving target information input by a user, wherein the target information carries clothing characteristic information for representing clothing characteristics of the virtual person;
the acquisition module is used for inputting the clothing feature information corresponding to the target information into an artificial intelligent AI model which is obtained through training in advance, and obtaining a virtual person clothing model based on the output of the AI model;
and the rendering module is used for rendering the virtual person clothing model to obtain the virtual person clothing.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the virtual person garment generation method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the virtual person garment generation method according to the first aspect.
In the embodiment of the application, target information input by a user is received, wherein the target information carries clothing feature information representing clothing features of a virtual person; the clothing feature information corresponding to the target information is input into a pre-trained artificial intelligence (AI) model, and a virtual person clothing model is obtained based on the output of the AI model; and the virtual person clothing model is rendered to obtain the virtual person clothing. In this way, the virtual person clothing is obtained from the clothing feature information carried in the target information input by the user, so the generated virtual person clothing can vary in style with the user's input, providing variety, and the virtual person clothing generation effect can be improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments of the present application will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a method for generating a virtual garment according to an embodiment of the present application;
FIG. 2 is one of the block diagrams of an electronic device provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of a one-key outfit change provided in an embodiment of the present application;
fig. 4 is a block diagram of a virtual garment generating apparatus according to an embodiment of the present application;
fig. 5 is a second block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Technical solutions in the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application are within the scope of the protection of the present application.
The terms "first", "second" and the like in the description and claims are used to distinguish between similar objects and do not necessarily describe a particular sequence or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that embodiments of the present application may be implemented in sequences other than those illustrated or described herein. Objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The method, the device, the equipment and the medium for generating the virtual clothes provided by the embodiment of the application are described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a flowchart of a method for generating a virtual person garment according to an embodiment of the present application, as shown in fig. 1, the method includes the following steps:
step 101, receiving target information input by a user, wherein the target information carries clothing feature information for representing clothing features of the virtual person.
Wherein the virtual person may comprise a digital person. The target information can carry descriptive text information, and the descriptive text information is used for describing the clothing feature information; or, the target information may carry a target image, where the target image includes a garment, and the garment includes garment feature information.
Optionally, the target information carries descriptive text information, and the descriptive text information is used for describing the clothing feature information;
the inputting the clothing feature information corresponding to the target information into an artificial intelligence (Artificial Intelligence, AI) model obtained by training in advance comprises the following steps:
identifying and extracting the descriptive text information from the target information;
and inputting the descriptive text information into an AI model obtained by training in advance.
In this embodiment, the target information carries descriptive text information used to describe the clothing feature information; the descriptive text information is identified and extracted from the target information and input into the pre-trained AI model, a virtual person clothing model is obtained based on the output of the AI model, and the virtual person clothing model is rendered to obtain the virtual person clothing. In this way, the virtual person clothing can be generated from descriptive text input by the user, the generated clothing can vary in style with the user's input, and the virtual person clothing generation effect can be improved.
Optionally, the target information carries a target image, and the target image contains clothing;
inputting the clothing feature information corresponding to the target information into an AI model obtained by training in advance, wherein the method comprises the following steps:
identifying the target image, and acquiring explicit data and implicit data of a garment in the target image, wherein the garment characteristic information comprises the explicit data and the implicit data of the garment;
and inputting the explicit data and the implicit data of the clothing into a pre-trained AI model.
In this embodiment, the target information carries a target image containing a garment; the target image is recognized to obtain explicit data and implicit data of the garment (the clothing feature information comprises this explicit and implicit data), the explicit data and implicit data are input into the pre-trained AI model, a virtual person clothing model is obtained based on the output of the AI model, and the virtual person clothing model is rendered to obtain the virtual person clothing. In this way, the virtual person clothing can be generated from an image input by the user, the generated clothing can vary in style with the user's input, and the virtual person clothing generation effect can be improved.
It should be noted that the method may receive an instruction describing clothing features input by a user, or receive a picture reflecting clothing features uploaded by the user, so as to obtain the clothing feature information of the user-defined clothing from the instruction or the picture; the clothing feature information is input into a pre-trained AI model to generate a virtual person clothing model; and the generated virtual person clothing model is rendered to obtain the virtual person clothing the user requires, which the user can then put on the virtual person.
If the instruction describing the clothing features is received, the prompt words with which the user describes the clothing features are identified in the instruction and taken as the clothing feature information; if a picture reflecting the clothing features uploaded by the user is received, the picture is recognized to obtain explicit data and implicit data of the clothing, which serve as the clothing feature information. The explicit data includes the type, material, color, etc. of the clothing, and the implicit data includes the pose, shape, and depth information of the clothing.
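The two input branches above can be sketched as follows. This is an illustrative Python sketch only; all names (`ClothingFeatures`, `extract_features`, `recognize_garment`) are hypothetical and not taken from the patent, and the recognition step is stubbed:

```python
from dataclasses import dataclass, field

@dataclass
class ClothingFeatures:
    """Clothing feature information extracted from user input."""
    prompt_words: list = field(default_factory=list)   # from a text instruction
    explicit: dict = field(default_factory=dict)       # type, material, color, ...
    implicit: dict = field(default_factory=dict)       # pose, shape, depth, ...

def recognize_garment(image):
    # Placeholder for the picture understanding/analysis unit:
    # it would return (explicit data, implicit data) of the garment.
    return ({"type": "long-sleeved shirt"}, {"depth": None})

def extract_features(target_info: dict) -> ClothingFeatures:
    """Route target information to the text branch or the image branch."""
    if "text" in target_info:
        # Text branch: the descriptive text is split into prompt words.
        words = [w.strip() for w in target_info["text"].split(",") if w.strip()]
        return ClothingFeatures(prompt_words=words)
    if "image" in target_info:
        # Image branch: recognition yields explicit data (type, material,
        # color) and implicit data (pose, shape, depth) of the garment.
        explicit, implicit = recognize_garment(target_info["image"])
        return ClothingFeatures(explicit=explicit, implicit=implicit)
    raise ValueError("target information carries neither text nor image")
```

Either branch produces a single `ClothingFeatures` value, so the downstream AI model sees one uniform input regardless of how the user supplied the requirement.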
In addition, "clothing" may also be referred to as "garment" or "apparel".
And 102, inputting the clothing feature information corresponding to the target information into an artificial intelligent AI model obtained through training in advance, and obtaining a virtual person clothing model based on the output of the AI model.
The target information can carry descriptive text information, the descriptive text information is used for describing the clothing feature information, and the clothing feature information corresponding to the target information can comprise clothing feature information described by the descriptive text information; or, the target information may carry a target image, where the target image includes a garment, and the garment includes garment feature information, and the garment feature information corresponding to the target information may include the garment feature information included in the target image.
Optionally, inputting the clothing feature information corresponding to the target information into a pre-trained AI model, and obtaining a virtual person clothing model based on the output of the AI model, including:
inputting the clothing feature information corresponding to the target information into an AI model obtained through training in advance, and outputting a 3D model of the virtual clothing and a texture map of the virtual clothing;
and obtaining a virtual person clothing model based on the 3D model of the virtual person clothing and the texture map of the virtual person clothing.
In this embodiment, the clothing feature information corresponding to the target information is input into the pre-trained AI model, which outputs a 3D model of the virtual person clothing and a texture map of the virtual person clothing; the virtual person clothing model is obtained from the 3D model and the texture map, and is rendered to obtain the virtual person clothing. In this way, the 3D model and texture map of the virtual person clothing are obtained from the clothing feature information carried in the target information input by the user, the virtual person clothing model is obtained from them, and the virtual person clothing is generated from that model; the generated clothing can vary in style with the user's input, so the virtual person clothing generation effect can be improved.
Optionally, the obtaining the virtual person clothing model based on the 3D model of the virtual person clothing and the texture map of the virtual person clothing includes:
performing mapping processing of clothing textures on the texture mapping of the virtual clothing based on the 3D model of the virtual clothing so as to perform texture mapping on the 3D model of the virtual clothing;
and determining the 3D model of the virtual person clothing after the mapping processing as a virtual person clothing model.
In this embodiment, the mapping process of the clothing texture is performed on the texture map of the virtual clothing based on the 3D model of the virtual clothing, so as to perform the texture map on the 3D model of the virtual clothing, and the 3D model of the virtual clothing after the mapping process is determined as the virtual clothing model, so that the mapping of the texture map of the virtual clothing can be realized.
Optionally, the mapping processing of the garment texture on the texture map of the virtual person garment based on the 3D model of the virtual person garment includes:
projecting each point on the 3D model surface of the virtual human garment to a 2D plane to obtain a projection area of the 2D plane;
and matching the coordinates of the texture map of the virtual human clothing with the projection area of the 2D plane so as to map the clothing texture of the texture map of the virtual human clothing.
It should be noted that after the 3D model of the virtual garment is adapted and adjusted, each point on the surface of the 3D model is projected onto a 2D plane, so that the texture map coordinates of the 2D virtual garment completely coincide with the 2D plane obtained by projection, and the generated texture map of the virtual garment is completely matched with the 3D garment model.
In this embodiment, each point on the surface of the 3D model of the virtual person clothing is projected onto a 2D plane to obtain a projection area on that plane, and the coordinates of the texture map of the virtual person clothing are matched to the projection area so as to map the clothing texture. The mapped 3D model of the virtual person clothing is determined as the virtual person clothing model, so the generated clothing texture is adapted to the 3D model of the virtual person clothing, and the generated virtual person clothing is more coordinated and realistic.
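The projection-and-matching step above can be illustrated with a minimal numerical sketch. The patent does not specify the projection, so a simple orthographic projection (dropping the depth axis) is assumed here, and `project_to_uv` is a hypothetical helper name:

```python
import numpy as np

def project_to_uv(vertices: np.ndarray) -> np.ndarray:
    """Project 3D surface points onto a 2D plane and normalize the
    projection area to [0, 1] so texture map coordinates can be
    matched to it.

    vertices: (N, 3) array of points on the garment model surface.
    Returns an (N, 2) array of UV coordinates.
    """
    # Orthographic projection: drop the depth axis (z).
    plane = vertices[:, :2]
    # Normalize so the texture map's [0, 1] coordinate range
    # coincides with the projection area.
    lo = plane.min(axis=0)
    span = plane.max(axis=0) - lo
    span[span == 0] = 1.0  # guard against degenerate axes
    return (plane - lo) / span
```

With the UVs normalized this way, the texture map coordinates and the projected plane cover the same [0, 1] square, which is the "matching" the method describes.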
Optionally, before the mapping processing of the garment texture is performed on the texture map of the virtual person garment based on the 3D model of the virtual person garment, the method further includes:
and adapting and adjusting the 3D model of the virtual person clothing based on the predefined human body structure data of the virtual person.
The predefined human body structure data of the virtual person may comprise the skeleton structure, human body model UV coordinates, triangle face vertex coordinates, normal data, and the like. A virtual person clothing adaptation AI model obtained through deep learning training may be adopted to adapt and adjust the 3D model of the virtual person clothing with the predefined human body structure data of the virtual person, so that the 3D model of the virtual person clothing is adapted to real human body structure data. Illustratively, the coordinates of the left and right shoulders of the 3D model of the virtual person clothing, the upper and lower body boundary coordinates, the waistline coordinates, and the like may be adapted to a real human body structure in dimensions such as shoulder width, upper body length, waistline, chest circumference, leg length, and arm length.
In this embodiment, the 3D model of the virtual person clothing is adapted and adjusted based on the predefined human body structure data of the virtual person, the clothing texture is mapped from the texture map onto the adjusted 3D model, and the mapped 3D model is determined as the virtual person clothing model, so that the finally generated virtual person clothing model matches the virtual person well.
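As an illustration of the adaptation step, the sketch below scales a garment mesh so that key dimensions (shoulder width, upper body length) match the body. The dimension keys and the uniform per-axis scaling are assumptions for illustration only; the patent's actual adaptation is performed by a trained AI model:

```python
import numpy as np

def adapt_garment(vertices: np.ndarray,
                  garment_dims: dict, body_dims: dict) -> np.ndarray:
    """Scale the garment mesh so its key dimensions match the body.

    vertices: (N, 3) garment mesh vertices; x is the width axis and
    y the height axis (an assumed convention).
    garment_dims / body_dims: measurements with hypothetical keys
    "shoulder_width" and "torso_length".
    """
    sx = body_dims["shoulder_width"] / garment_dims["shoulder_width"]
    sy = body_dims["torso_length"] / garment_dims["torso_length"]
    # Depth (z) is scaled with the width so proportions stay plausible.
    scale = np.array([sx, sy, sx])
    return vertices * scale
```

A real adaptation would also adjust individual landmark coordinates (shoulder points, waistline, upper/lower body boundary) rather than apply one global scale, as the paragraph above describes.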
And 103, rendering the virtual person clothing model to obtain the virtual person clothing.
The rendering process may be a preview rendering process or a complete rendering process. Preview rendering renders and displays the generated 3D model of the virtual person clothing and the texture map of the virtual person clothing separately; complete rendering renders and displays the virtual person clothing model obtained by mapping the texture map onto the generated 3D model of the virtual person clothing.
It should be noted that the generated virtual person clothing may be used for a one-key outfit change; specifically, the user may put the generated virtual person clothing on the virtual person to change its outfit.
According to the embodiment of the application, corresponding virtual person clothing can be generated by the pre-trained AI model from the user's custom requirements for the virtual person clothing, so that the user can change the virtual person's outfit and decorate the virtual person image, improving the user's participation, immersion, and sense of identity. Moreover, the user's custom requirement may be a text instruction describing the desired virtual person clothing, or a picture reflecting the desired clothing features; the user can choose the input mode according to actual needs, which is flexible and satisfies the needs of different users.
The virtual person clothing is obtained from the clothing feature information representing the clothing features of the virtual person carried in the target information input by the user, satisfying different users' personalized requirements for virtual person clothing in different styles, and improving the user's participation, immersion, and sense of identity in the virtual person outfit change process.
In the related art, outfit changes for a virtual person are basically realized by matching according to the user's input, in roughly the following two ways. In the first, an existing virtual person garment is matched from a material library according to the user's information or the information of the user's surrounding environment and then fitted onto the virtual person. In the second, the interface for dressing the user's virtual person image presents lists of virtual garments in different styles, and the user selects a favorite garment from the lists and fits it onto his or her virtual person.
Both of the above methods meet simple virtual person outfit change requirements, but they also have certain limitations: the selectable garments are produced in advance by the service provider with a 3D asset production tool and are therefore limited in style and variety. Supporting many styles would cost a great deal of time and manpower, and production would still be limited. In addition, matching against a material library cannot fully satisfy users' personalized requirements, since it cannot match exactly what every user intends.
In order to solve the problem of 3D asset production efficiency and go beyond a matching material library to satisfy each user's personalized dressing requirements, the embodiment of the application provides an AI-based virtual person one-key outfit change method. The embodiment of the application can solve problems such as low production efficiency of virtual person clothing 3D assets and insufficient diversity of virtual person clothing styles, and can quickly satisfy each user's personalized requirements for the virtual image, thereby improving the users' participation, immersion, and sense of identity.
In the embodiment of the application, target information input by a user is received, wherein the target information carries clothing feature information representing clothing features of a virtual person; the clothing feature information corresponding to the target information is input into a pre-trained artificial intelligence (AI) model, and a virtual person clothing model is obtained based on the output of the AI model; and the virtual person clothing model is rendered to obtain the virtual person clothing. In this way, the virtual person clothing is obtained from the clothing feature information carried in the target information input by the user, so the generated virtual person clothing can vary in style with the user's input, providing variety, and the virtual person clothing generation effect can be improved.
As a specific embodiment, the virtual person clothing generation method may be applied to an electronic device. As shown in fig. 2, the electronic device includes a user application module, a user input data processing module, an AI capability module, an adaptation module, and a rendering module. In this embodiment, a digital person is taken as an example of the virtual person.
(1) User application module
The user application module is the interface display module for user interaction and is mainly a 3D application interaction interface developed on a rendering engine such as Unity or UE. The user interaction interface mainly comprises: a user digital person image display interface, a user data input interface, and other operation function interfaces. The user enters the one-key outfit change interface from an entrance in this module, completes data input by entering a description instruction or uploading a reference picture through an input box, and submits the data to the user input data processing module.
In addition, after the user confirms the generated 3D model or texture, it is rendered by the rendering module and displayed to the user, and can be used for other operations, for example: the result can be saved for digital person image collocation, or the overall effect of the generated 3D model and texture can be scored and evaluated.
(2) User input data processing module
The digital person one-key outfit change method in this embodiment depends on data or instructions input by the user. The information input by the user may include descriptive text information, for example a passage describing elements such as the clothing style, fabric, and pattern; alternatively, it may be a picture, for example a picture the user likes that contains a clothing style. Different processing units of the user input data processing module are invoked for the different inputs.
If the user inputs a passage of descriptive text, the prompt word processing unit is invoked to preprocess the user input into prompt words according to the input rules of the AI generation module. For example, for the input description "I want a red-and-white checked shirt, a cotton-linen long-sleeved shirt", the preprocessed output is: "cotton-linen material, long-sleeved shirt, red and white checks". Modifiers are then supplemented on the preprocessed output so that the prompt words are more complete; the supplemented prompt words are: "cotton-linen material, long-sleeved shirt, red and white checks, rich details, high resolution, 8K, high definition (HD), cloth texture".
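A greatly simplified sketch of the prompt-word preprocessing and modifier supplement described above; the filler-phrase list and helper names are hypothetical, while the quality modifiers come from the example in the text:

```python
# Hypothetical filler phrases stripped during preprocessing.
FILLER_PHRASES = ("i want", "please", "give me")

# Quality modifier words supplemented after preprocessing
# (taken from the example in this embodiment).
QUALITY_MODIFIERS = ("rich details", "high resolution", "8K",
                     "high definition (HD)", "cloth texture")

def preprocess_prompt(description: str) -> str:
    """Reduce a free-form description to comma-separated prompt words."""
    text = description.lower()
    for phrase in FILLER_PHRASES:
        text = text.replace(phrase, "")
    words = [w.strip() for w in text.replace(".", ",").split(",") if w.strip()]
    return ", ".join(words)

def supplement_modifiers(prompt: str) -> str:
    """Append quality modifiers so the prompt words are more complete."""
    return ", ".join((prompt, *QUALITY_MODIFIERS))
```

A production unit would map phrases like "cotton-linen long-sleeved shirt" to normalized attribute terms; this sketch only shows the split-then-supplement structure of the pipeline.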
If the user inputs a picture, the picture understanding and analysis unit is invoked. The picture is first recognized and analyzed to determine whether it contains referenceable clothing content; if not, the user is prompted to change the picture. If it does, further recognition is performed to generate the data for the AI generation module; the recognition produces the following two types of data:
a) Explicit data, for example: clothing type: jacket, long-sleeved shirt; clothing material: cotton-linen; clothing color: red and white checks; clothing details: small white translucent buttons, small square checks; etc.
b) Implicit data: information such as the pose, shape, and depth of the clothing in the picture is identified through a decoder network of the recognition module. Depth refers to the depth information of the garment in the image.
The identified clothing feature information is then sent to the AI generation module for generating the digital person clothing.
(3) AI capability module
The AI capability module mainly comprises the following subunits: an AI large model training unit, an AI large model service unit, and an AI generation unit. The AI large model service unit has at least a large model capability for 3D model generation and a large model capability for clothing texture generation, so that it can generate both the 3D model and the clothing texture map.
The 3D model generation and clothing texture generation large models are pre-trained large models obtained by the AI large model training unit from specifically pre-processed corresponding data sets. The AI generation unit encapsulates the large model services: after the user inputs instructions and data, the unit passes the user input to the large model capability, and after the content is generated, it outputs the result to the caller.
Besides large model pre-training, the AI large model training unit also uses the user's feedback on the generated content: the input prompt words corresponding to positive feedback, together with the AI-generated content, are taken as training data for the large model, so that the large model's capability is continuously trained and evolves as users use the system.
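The encapsulation and feedback loop described in this subsection can be sketched as follows; the class and method names are hypothetical, and the mesh and texture services stand in for the large-model capabilities:

```python
class AIGenerationUnit:
    """Wraps the large model services: forwards the caller's input to the
    3D-model and texture capabilities, returns the generated content, and
    keeps positive feedback as future training data."""

    def __init__(self, mesh_service, texture_service):
        self.mesh_service = mesh_service        # 3D model generation large model
        self.texture_service = texture_service  # clothing texture large model
        self.training_pairs = []                # (prompt, content) from positive feedback

    def generate(self, prompt: str):
        """Pass the user input to both capabilities and return the results."""
        mesh = self.mesh_service(prompt)
        texture = self.texture_service(prompt)
        return mesh, texture

    def record_feedback(self, prompt: str, content, positive: bool) -> None:
        # Only positive (forward) feedback becomes training data.
        if positive:
            self.training_pairs.append((prompt, content))
```

The accumulated `training_pairs` would then be handed to the AI large model training unit, closing the continuous-evolution loop the text describes.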
(4) Adapter module
The 3D model and clothing texture map generated by the AI generation unit are adapted and adjusted by the adaptation module, after which the assembly and display of the generated materials with the digital person are completed.
The adaptation module works mainly from the digital human body structure data predefined by the digital person's base mesh, such as the skeleton structure, human body model UV coordinates, triangle face vertex coordinates, and normal data. A digital person clothing adaptation AI model obtained through deep learning training adapts and adjusts the generated digital person clothing with the digital person's body structure data, mainly by adjusting the corresponding coordinates, for example: the coordinates of the left and right shoulders, the upper and lower body boundary coordinates, the waistline coordinates, etc. Through this adaptation the generated 3D model of the digital person clothing corresponds to real human body data and is finally fitted to the digital person model, i.e. assembly is completed; at the same time the 3D model is bound to the bone data, achieving the effect of assembling the 3D model with the digital human body.
After the 3D model adaptation is completed, the generated texture map of the digital human garment (e.g., a PBR texture map) can be mapped onto the garment based on the UV coordinates of the 3D garment. Texture mapping may be performed using trilinear interpolation (texture3D). Specifically, each point on the surface of the 3D model is projected onto a 2D plane; the generated 2D texture map is then matched against the 2D plane obtained from this 3D projection, so that the 2D texture coordinates align with the 2D plane and the texture fits the 3D model exactly, completing the mapping of the texture map onto the clothing model.
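The UV lookup step can be sketched as follows. For simplicity this uses nearest-neighbor sampling rather than the trilinear interpolation mentioned above, and assumes normalized UV coordinates with a bottom-left origin; both choices are illustrative conventions, not mandated by the patent:

```python
def sample_garment_texture(uv_coords, texture):
    """Look up a color for each garment vertex from a 2D texture map.
    uv_coords: list of (u, v) pairs in [0, 1]; texture: 2D grid of colors."""
    h, w = len(texture), len(texture[0])
    colors = []
    for u, v in uv_coords:
        # UV origin is bottom-left by convention; image rows grow downward,
        # so the v axis is flipped. Clamp to stay inside the texture.
        x = min(w - 1, max(0, round(u * (w - 1))))
        y = min(h - 1, max(0, round((1.0 - v) * (h - 1))))
        colors.append(texture[y][x])
    return colors
```

A renderer would perform this lookup per fragment with filtering; the per-vertex version above only shows the coordinate matching.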
(5) Rendering module
The rendering module is divided into a preview rendering module and a complete rendering module. The preview rendering module renders and displays the generated 3D model and the clothing texture effect on their own, generally without adaptation processing by the adaptation module. The preview rendering module supports operations such as 3D model effect display and interactive dragging, and the 3D model can be viewed from 360 degrees. The preview rendering module can also render and display the clothing texture as a piece of cloth unfolded on a plane.
The complete rendering module is triggered by an outfit-change button provided in the interactive interface. It takes the AI-generated assets such as the clothing model and the texture map, processes them through the adaptation module, fully assembles, binds and maps them onto the digital person, and finally renders and displays the overall effect with a rendering engine. In addition, the user can dynamically adjust parameters such as the model position and the texture size and position based on the rendering effect.
After the user confirms saving, the effect configuration parameters of the rendered presentation are stored, and are further saved together as assets of the digital person. These digital person assets can be viewed and used in the user application module, finally realizing the one-click outfit change of the user's digital person. Fig. 3 is a schematic diagram of the one-click outfit change.
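A hedged sketch of persisting the confirmed effect configuration as a reusable asset. The JSON file layout and parameter names (`model_pos`, `texture_scale`) are assumptions for illustration; the patent does not specify a storage format:

```python
import json
import pathlib

def save_outfit_asset(asset_dir, outfit_name, render_params):
    """Persist the confirmed rendering configuration as a digital-person
    asset, so the same outfit can later be reapplied with one click."""
    asset_dir = pathlib.Path(asset_dir)
    asset_dir.mkdir(parents=True, exist_ok=True)
    path = asset_dir / f"{outfit_name}.json"
    path.write_text(json.dumps(render_params, indent=2))
    return path

def load_outfit_asset(path):
    """Read a previously saved outfit asset back into a parameter dict."""
    return json.loads(pathlib.Path(path).read_text())
```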
Based on the above modules, AI-based one-click outfit change for digital humans can be realized. This solves problems such as the low production efficiency of 3D assets for digital human clothing and the insufficient style diversity of digital human clothing, while quickly meeting each user's personalized requirements for the digital human image, thereby improving users' participation, immersion and sense of recognition.
The embodiment of the application also provides a virtual person clothing generation device. Referring to fig. 4, fig. 4 is a block diagram of a virtual garment generating apparatus according to an embodiment of the present application. As shown in fig. 4, the virtual person clothing generating apparatus 200 includes:
a receiving module 201, configured to receive target information input by a user, where the target information carries clothing feature information for characterizing clothing features of a virtual person;
an obtaining module 202, configured to input garment feature information corresponding to the target information into an artificial intelligent AI model obtained by training in advance, and obtain a virtual person garment model based on output of the AI model;
and the rendering module 203 is configured to perform rendering processing on the virtual person clothing model to obtain a virtual person clothing.
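The three-module structure above can be sketched as a minimal pipeline. The `ai_model` and `renderer` callables are hypothetical placeholders standing in for the pre-trained AI model and the rendering engine:

```python
class VirtualGarmentDevice:
    """Mirrors the receiving -> obtaining -> rendering module structure."""

    def __init__(self, ai_model, renderer):
        self.ai_model = ai_model  # placeholder for the pre-trained AI model
        self.renderer = renderer  # placeholder for the rendering backend

    def generate(self, target_info):
        # Receiving module: extract the garment feature information.
        features = target_info["garment_features"]
        # Obtaining module: run the AI model to get the garment model.
        garment_model = self.ai_model(features)
        # Rendering module: render the garment model into the final garment.
        return self.renderer(garment_model)
```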
Optionally, the target information carries descriptive text information, and the descriptive text information is used for describing the clothing feature information;
the acquisition module is specifically configured to:
identifying and extracting the descriptive text information from the target information;
inputting the descriptive text information into an AI model obtained by training in advance;
and obtaining a virtual human clothing model based on the output of the AI model.
Optionally, the target information carries a target image, and the target image contains clothing;
the acquisition module is specifically configured to:
identifying the target image, and acquiring explicit data and implicit data of a garment in the target image, wherein the garment characteristic information comprises the explicit data and the implicit data of the garment;
inputting the explicit data and the implicit data of the clothing into an AI model obtained by pre-training;
and obtaining a virtual human clothing model based on the output of the AI model.
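As an illustrative sketch of splitting a garment image into explicit and implicit data: here the explicit data are human-readable attributes and the implicit data a crude embedding vector. The specific features chosen (mean brightness, normalized pixel vector) are assumptions for demonstration, not the patent's actual recognition model:

```python
def extract_garment_data(image):
    """image: 2D grid of grayscale pixel values in [0, 255].
    Returns (explicit, implicit): readable attributes vs. a latent-style
    feature vector to be fed to the AI model."""
    pixels = [p for row in image for p in row]
    explicit = {
        "mean_brightness": sum(pixels) / len(pixels),
        "size": (len(image), len(image[0])),
    }
    # "Implicit" stand-in: normalized pixel vector acting as an embedding.
    implicit = [p / 255.0 for p in pixels]
    return explicit, implicit
```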
Optionally, the acquiring module includes:
the input unit is used for inputting the clothing feature information corresponding to the target information into the AI model obtained through training in advance and outputting the 3D model of the virtual clothing and the texture map of the virtual clothing;
and the acquisition unit is used for acquiring the virtual person clothing model based on the 3D model of the virtual person clothing and the texture map of the virtual person clothing.
Optionally, the acquiring unit includes:
the mapping subunit is used for performing clothing texture mapping processing on the texture map of the virtual person clothing based on the 3D model of the virtual person clothing, so as to perform texture mapping on the 3D model of the virtual person clothing;
and the determining subunit is used for determining the 3D model of the virtual person clothing after the mapping processing as a virtual person clothing model.
Optionally, the mapping subunit is specifically configured to:
projecting each point on the 3D model surface of the virtual human garment to a 2D plane to obtain a projection area of the 2D plane;
and matching the coordinates of the texture map of the virtual human clothing with the projection area of the 2D plane so as to map the clothing texture of the texture map of the virtual human clothing.
Optionally, the apparatus further comprises:
and the adjusting module is used for carrying out adaptation adjustment on the 3D model of the virtual person clothing based on the predefined human body structure data of the virtual person.
According to the embodiments of the present application, the virtual person clothing is generated from the clothing feature information, carried in the target information input by the user, that characterizes the clothing features of the virtual person. The style of the generated virtual person clothing can therefore change with the user's input, giving it diversity and improving the virtual person clothing generation effect.
The virtual person garment generating apparatus 200 in the embodiment of the present application may be an electronic device, or may be a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. By way of example, the electronic device may be a mobile phone, tablet computer, notebook computer, palmtop computer, vehicle-mounted electronic device, mobile internet device (Mobile Internet Device, MID), augmented reality (augmented reality, AR)/virtual reality (Virtual Reality, VR) device, robot, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., and the embodiments of the present application are not particularly limited.
The virtual person garment generating apparatus 200 in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an ios operating system, or other possible operating systems, which are not specifically limited in the embodiments of the present application.
The virtual person garment generating apparatus 200 provided in the embodiment of the present application can implement each process implemented by the embodiment of the method described in fig. 1, and in order to avoid repetition, a detailed description is omitted herein.
The embodiment of the application also provides an electronic device. Referring to fig. 5, fig. 5 is a block diagram of an electronic device according to an embodiment of the present application. As shown in fig. 5, the electronic device includes: a processor 300, a memory 320, and a program or instructions stored on the memory 320 and executable on the processor 300, the processor 300 being configured to read the program or instructions in the memory 320; the electronic device further includes a bus interface and a transceiver 310.
A transceiver 310 for receiving and transmitting data under the control of the processor 300.
In fig. 5, the bus architecture may comprise any number of interconnected buses and bridges, specifically linking together one or more processors represented by processor 300 and various circuits of memory represented by memory 320. The bus architecture may also link together various other circuits, such as peripheral devices, voltage regulators and power management circuits, which are well known in the art and therefore are not described further herein. The bus interface provides an interface. The transceiver 310 may be a number of elements, i.e., including a transmitter and a receiver, providing a means for communicating with various other apparatus over a transmission medium. The processor 300 is responsible for managing the bus architecture and general processing, and the memory 320 may store data used by the processor 300 in performing operations.
The processor 300 is configured to read a program or an instruction in the memory 320, and perform the following steps:
receiving target information input by a user, wherein the target information carries clothing feature information for representing clothing features of a virtual person;
inputting clothing feature information corresponding to the target information into an artificial intelligent AI model obtained through training in advance, and obtaining a virtual person clothing model based on the output of the AI model;
and rendering the virtual person clothing model to obtain the virtual person clothing.
Optionally, the target information carries descriptive text information, and the descriptive text information is used for describing the clothing feature information;
the processor 300 is configured to perform the inputting, into the AI model trained in advance, the garment feature information corresponding to the target information, and includes:
identifying and extracting the descriptive text information from the target information;
and inputting the descriptive text information into an AI model obtained by training in advance.
Optionally, the target information carries a target image, and the target image contains clothing;
the processor 300 is configured to perform the inputting, into the AI model trained in advance, the garment feature information corresponding to the target information, and includes:
identifying the target image, and acquiring explicit data and implicit data of a garment in the target image, wherein the garment characteristic information comprises the explicit data and the implicit data of the garment;
and inputting the explicit data and the implicit data of the clothing into a pre-trained AI model.
Optionally, the processor 300 is configured to perform the inputting, by using the processor, the clothing feature information corresponding to the target information into a pre-trained AI model, obtain a virtual person clothing model based on an output of the AI model, and include:
inputting the clothing feature information corresponding to the target information into an AI model obtained through training in advance, and outputting a 3D model of the virtual clothing and a texture map of the virtual clothing;
and obtaining a virtual person clothing model based on the 3D model of the virtual person clothing and the texture map of the virtual person clothing.
Optionally, the processor 300 is configured to obtain a virtual person clothing model based on the 3D model of the virtual person clothing and the texture map of the virtual person clothing, including:
performing mapping processing of the clothing texture on the texture map of the virtual person clothing based on the 3D model of the virtual person clothing, so as to perform texture mapping on the 3D model of the virtual person clothing;
and determining the 3D model of the virtual person clothing after the mapping processing as a virtual person clothing model.
Optionally, the processor 300 is configured to perform the mapping process of the garment texture on the texture map of the virtual person garment based on the 3D model of the virtual person garment, including:
projecting each point on the 3D model surface of the virtual human garment to a 2D plane to obtain a projection area of the 2D plane;
and matching the coordinates of the texture map of the virtual human clothing with the projection area of the 2D plane so as to map the clothing texture of the texture map of the virtual human clothing.
Optionally, the processor 300 is further configured to perform:
and adapting and adjusting the 3D model of the virtual person clothing based on the predefined human body structure data of the virtual person.
According to the embodiments of the present application, the virtual person clothing is generated from the clothing feature information, carried in the target information input by the user, that characterizes the clothing features of the virtual person. The style of the generated virtual person clothing can therefore change with the user's input, giving it diversity and improving the virtual person clothing generation effect.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the processes of the embodiment of the method described in fig. 1 are implemented, and the same technical effects can be achieved, so that repetition is avoided, and no further description is given here.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium such as a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction, to implement each process of the embodiment of the method described in fig. 1, and to achieve the same technical effect, so that repetition is avoided, and no further description is given here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or of course by hardware alone, though in many cases the former is the preferred implementation. Based on such understanding, the technical solutions of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (such as ROM/RAM, magnetic disk or optical disk), comprising several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those of ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are also within the protection of the present application.

Claims (10)

1. A method of virtual person garment generation, the method comprising:
receiving target information input by a user, wherein the target information carries clothing feature information for representing clothing features of a virtual person;
inputting clothing feature information corresponding to the target information into an artificial intelligent AI model obtained through training in advance, and obtaining a virtual person clothing model based on the output of the AI model;
and rendering the virtual person clothing model to obtain the virtual person clothing.
2. The method of claim 1, wherein the target information carries descriptive text information describing the garment characteristic information;
inputting the clothing feature information corresponding to the target information into an AI model obtained by training in advance, wherein the method comprises the following steps:
identifying and extracting the descriptive text information from the target information;
and inputting the descriptive text information into an AI model obtained by training in advance.
3. The method of claim 1, wherein the target information carries a target image, the target image comprising a garment;
inputting the clothing feature information corresponding to the target information into an AI model obtained by training in advance, wherein the method comprises the following steps:
identifying the target image, and acquiring explicit data and implicit data of a garment in the target image, wherein the garment characteristic information comprises the explicit data and the implicit data of the garment;
and inputting the explicit data and the implicit data of the clothing into a pre-trained AI model.
4. The method according to claim 1, wherein inputting the clothing feature information corresponding to the target information into a pre-trained AI model, obtaining a virtual person clothing model based on an output of the AI model, comprises:
inputting the clothing feature information corresponding to the target information into an AI model obtained through training in advance, and outputting a 3D model of the virtual clothing and a texture map of the virtual clothing;
and obtaining a virtual person clothing model based on the 3D model of the virtual person clothing and the texture map of the virtual person clothing.
5. The method of claim 4, wherein the obtaining a virtual person garment model based on the 3D model of the virtual person garment and a texture map of the virtual person garment comprises:
performing mapping processing of the clothing texture on the texture map of the virtual person clothing based on the 3D model of the virtual person clothing, so as to perform texture mapping on the 3D model of the virtual person clothing;
and determining the 3D model of the virtual person clothing after the mapping processing as a virtual person clothing model.
6. The method of claim 5, wherein the mapping the garment texture to the texture map of the virtual person garment based on the 3D model of the virtual person garment comprises:
projecting each point on the 3D model surface of the virtual human garment to a 2D plane to obtain a projection area of the 2D plane;
and matching the coordinates of the texture map of the virtual human clothing with the projection area of the 2D plane so as to map the clothing texture of the texture map of the virtual human clothing.
7. The method of claim 5, wherein the method further comprises, prior to mapping the texture map of the virtual person garment to the garment texture based on the 3D model of the virtual person garment:
and adapting and adjusting the 3D model of the virtual person clothing based on the predefined human body structure data of the virtual person.
8. A virtual person garment creation apparatus, the apparatus comprising:
the receiving module is used for receiving target information input by a user, wherein the target information carries clothing characteristic information for representing clothing characteristics of the virtual person;
the acquisition module is used for inputting the clothing feature information corresponding to the target information into an artificial intelligent AI model which is obtained through training in advance, and obtaining a virtual person clothing model based on the output of the AI model;
and the rendering module is used for rendering the virtual person clothing model to obtain the virtual person clothing.
9. An electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the virtual person garment generation method of any one of claims 1-7.
10. A readable storage medium, characterized in that it has stored thereon a program or instructions which, when executed by a processor, implement the steps of the virtual person garment generation method according to any of claims 1-7.
CN202311734905.7A 2023-12-15 2023-12-15 Virtual human clothing generation method, device, equipment and medium Pending CN117710581A (en)

Publications (1)

Publication Number Publication Date
CN117710581A 2024-03-15

Family

ID=90149391



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination